Sinbad had his 🦜 too: reclaiming the design of digital artifacts with ChatGPT-powered users

ChatGPT’s advanced neural network allows it to generate human-like responses with remarkable speed and fluency, making it a game-changer in today’s rapidly evolving technological landscape. In critical AI studies, there is a discussion around LLMs being akin to “stochastic parrots” [1]. These models, while powerful and transformative, often mimic and replicate the data they were trained on without genuine understanding, much like a parrot. But envision a scenario where we could leverage the parroting ability of these models for user empowerment and active engagement in design. Imagine a parrot that is not merely a mimic but a collaborator in the crafting and personalization of digital artifacts, reminiscent of the legendary Sinbad’s wise parrot.

Mahan Mehrvarz
UX Collective

--

A poster of the old 1979 Sinbad’s Adventures cartoon
Sinbad and his Parrot

The impact

ChatGPT, as an instance of Generative AI models, has been creating a buzz because of its potential to disrupt multiple industries, including information technology, investment, and the creative professions. The AI tool’s human-like responses and ability to assist with tasks like research and content creation have caught the attention of investors. Meanwhile, the impact of ChatGPT in education has raised questions about both enhancing learning and the potential for cheating. Although AI has long been presented as the solution for mundane tasks, ChatGPT is indeed a proof of concept for a generation of AI tools that can assist creative professionals on another level. ChatGPT can aid in research and editing, allowing writers to focus on their creative process. It is a testament to the rapidly evolving technological landscape and its potential to shape the future.

AI meets design and development

Communities of design practice continue to speculate about how new instances of Generative AI models like ChatGPT might disrupt the design industry, especially the design of digital products. Within the design community there are concerns that certain jobs may be greatly affected by the rise of reliable generative models, as well as hopes that conversational interfaces can disrupt and enhance the user experience.

Meanwhile, OpenAI had already developed another generative model a few years earlier: “Codex,” designed to generate computer code based on prompts.

First demo of Codex on August 10, 2021

Although Codex was clearly aimed at disrupting technology-development occupations, it was not widely explored until the release of ChatGPT and the subsequent surge of people asking ChatGPT software engineering and programming questions. OpenAI recently stated that since its “GPT-3.5 Turbo” model outperforms Codex, it has discontinued work on Codex and merged the models. Does the emergence of such models, capable of producing computer code from prompts, have any significance for the design discipline? Other than reshaping the arrangement of development teams and disrupting the programming profession, can these models change something fundamental about the design, development, and use of digital products?

Empowering users with ChatGPT (LLMs)

The ability of LLMs to create or modify computer code can certainly create new norms in the setup of development teams. What is less explored, however, is how models like ChatGPT make the development of digital artifacts accessible to non-programmers. With LLMs, non-programmer users of digital products can be involved in the iterative development of these products and services more than ever. LLMs, with their ability to mediate between source code and users, have the potential to offer increased customization and reconfigurability, a situation similar to the one Sinbad’s parrot performed within: it was a collaborator and advisor, but always a servant under the control of Sinbad, who had the greater understanding of the circumstances. This could be an end to the domination of technical experts, with particular worldviews and intents, over the realm of designing everyday products. It empowers end-users to personalize their daily products so that they better meet their needs and address issues of usability as well as fairness and inclusion.

Open Artifacts: An alternative future of digital artifacts

Let’s imagine that we could talk to ChatGPT about the everyday products we use and that, with its coding capabilities (of course not there yet), it could more or less manipulate any given source code of our everyday artifacts: popular digital products like Spotify, Zoom, YouTube, etc.

A screenshot of the fictional conversation below, within the OpenAI ChatGPT interface UI.
Fictional conversation between user and ChatGPT in the OpenAI interface

User: Hey ChatGPT, these new tracks on my Spotify ain’t vibe-in’ with me. Think it’s ’cause peeps been blastin’ their tunes on my device. You think you could tweak the app to ignore what went down last weekend?

ChatGPT: Yes, it can be done. Can you recall approximately when your Spotify was being used to play music? Please also provide your Spotify account username and password. Before making any changes to the app, I want to go over the modifications with you.

The ability to modify ML models to exclude certain parts of their training data has been a recognized topic of discussion when it comes to designing systems sensitive to values such as transparency and autonomy. AI-assisted features often assume that every data footprint of user interaction will lead to a better customer experience. Depending on the situation, however, this assumption might be incorrect. Here, LLMs can play the role of a mediator between the algorithm and the user, negotiating situational reconfigurations based on specific user preferences.
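
To make the Spotify exchange above a bit more concrete, here is a minimal sketch of what “ignore what went down last weekend” could mean in code. It assumes a purely hypothetical setup in which the playback history is available as a local list of events and the recommender simply consumes whatever history it is given; no streaming service exposes such a hook today, and every name below (PlaybackEvent, exclude_window, recommend_tracks) is invented for illustration.

from datetime import datetime
from typing import TypedDict

class PlaybackEvent(TypedDict):
    track_id: str
    played_at: datetime

def exclude_window(history: list[PlaybackEvent],
                   start: datetime, end: datetime) -> list[PlaybackEvent]:
    """Drop every playback event that falls inside the unwanted time window."""
    return [e for e in history if not (start <= e["played_at"] <= end)]

# Usage: filter out last weekend before handing the history to a (hypothetical)
# recommendation routine, so those listens never influence the suggestions.
# clean_history = exclude_window(history, datetime(2023, 5, 6), datetime(2023, 5, 8))
# recommendations = recommend_tracks(clean_history)  # hypothetical function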

A screenshot of the fictional conversation below, within the OpenAI ChatGPT interface UI.
Fictional conversation between user and ChatGPT in the OpenAI interface

User: Hey ChatGPT, can you help me sort out the blur effect in my Zoom calls? It’s messing up and blurring part of my hair, making me look like I’ve got a head scarf on. I think it’s confusing my hair with something in the background.

ChatGPT: Sure, I can help with that. The blur effect relies on a model that separates you from your background, and it seems to be misclassifying part of your hair. I can adjust its sensitivity so that borderline areas like hair are kept sharp. Keep in mind, though, that a lower sensitivity may leave small parts of the background unblurred.
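
For a sense of what such a tweak could look like under the hood, here is a minimal sketch of a typical virtual-background pipeline, assuming (purely for illustration) that it uses a person-segmentation mask with an adjustable threshold; Zoom’s actual implementation is not public. Lowering the threshold treats borderline pixels, such as wispy hair, as part of the person, so they are left unblurred.

import cv2
import numpy as np
import mediapipe as mp

# Person-segmentation model; a stand-in for whatever Zoom actually uses.
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

def apply_background_blur(frame_bgr: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Blur everything the model does not consider part of the person."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    mask = segmenter.process(rgb).segmentation_mask  # per-pixel "person" confidence in [0, 1]
    person = mask > threshold                        # lower threshold -> more hair kept sharp
    blurred = cv2.GaussianBlur(frame_bgr, (55, 55), 0)
    return np.where(person[..., None], frame_bgr, blurred)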

A screenshot of the fictional conversation below, within the OpenAI ChatGPT interface UI.
Fictional conversation between user and ChatGPT in the OpenAI interface

Me: Hey ChatGPT, I’m a left-handed user and this note-taking app I’m using seems to be designed for right-handed users. Most left-handed people would want a mirrored interface, but I’ve actually found that I prefer using right-handed interfaces. However, there’s one feature that’s really bothering me: the scrollbar is on the right side of the screen and it’s hard for me to reach. Can you tweak the app’s source code to move the scrollbar to the left side?

ChatGPT: Of course! I can adjust the app’s source code to move the scrollbar to the left side, making it more accessible for you. This would involve identifying the scrollbar module in the source code and modifying its position parameters. Please provide me with your credentials for the note-taking app, and I’ll make the necessary changes.
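
As a rough illustration of how small such a change can be, here is a minimal sketch of a hypothetical tkinter-based note-taking app (not the app from the conversation, whose source code we obviously do not have). Moving the scrollbar to the left comes down to changing which side it is packed on.

import tkinter as tk

root = tk.Tk()
root.title("Hypothetical note-taking app")

text = tk.Text(root, wrap="word")
scrollbar = tk.Scrollbar(root, orient="vertical", command=text.yview)
text.configure(yscrollcommand=scrollbar.set)

# A right-handed layout would pack the scrollbar with side=tk.RIGHT;
# the requested change is simply to pack it on the left instead.
scrollbar.pack(side=tk.LEFT, fill="y")
text.pack(side=tk.RIGHT, fill="both", expand=True)

root.mainloop()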

The practice of creating detailed user personas has been a cornerstone of the design of digital products. Yet these personas, however comprehensive, are inherently unable to capture the full spectrum of user diversity and individuality. This is another place where the potential of AI tools like ChatGPT becomes evident. They offer the possibility for users to tailor digital products to their unique needs, effectively transcending the constraints of predefined personas. This capability to customize is not just a feature; it is a paradigm shift, a step towards a digital world that caters to all users, not just those who neatly fit into our predefined categories.

Justice and fairness in the age of AI

Justice and fairness are controversial aspects that have yet to be fully addressed within the development of digital products, and specifically machine learning. One popular argument is that most products are built on ableist, normalist, and universalist assumptions, which do not provide appropriate responses for people with different physical abilities, identities, appearances, and cultures. Looking through a pragmatic lens, it is very likely that the game-theoretic dynamics of many powerful economies are the main obstacle to creating fair and just digital products, because companies often have to prioritize profit, and to hit that goal they usually [and maybe naturally] decide to optimize for larger, wealthier, industrialized customer segments [2].

However, it is plausible that with the rise of future LLMs like ChatGPT, users won’t have to face the consequences of the socio-economic assumptions of technology designers simply because the designers are the ones who get to code (or who understand technology better). End-users can intervene and design hyper-personalized experiences that feel more justice-oriented for their situated use-case scenarios.

ChatGPT-powered users vs. ChatGPT-powered products

The ability of ChatGPT to generate computer code has the potential to disrupt the design industry on a fundamental level and change the way technology is developed and shipped. Rather than being implemented inside the products (as an interface, business logic, or certain APIs), however, ChatGPT can serve as a third-party assistant for users, standing on their side as a modern-day Sinbad’s parrot. This shifts the power dynamics, giving users more technical agency and the strength to go beyond merely using products and start negotiating with designers and technologists. They can personalize their digital experience. When facing issues of usability as well as fairness, users can go beyond mere disengagement and take more active roles by resisting, reconfiguring, or manipulating unusable, unfair, or unwanted artifacts.

A schematic illustration showing how LLMs like ChatGPT can be integrated on the side of the user rather than only in the product
LLMs on the side of users
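
To make the arrangement in the figure more concrete, here is a purely hypothetical sketch of a user-side assistant loop: the LLM turns a complaint into a reviewable change proposal and applies nothing until the user approves. Every name in it (Patch, propose_patch, negotiate) is invented for illustration; no such API exists today.

from dataclasses import dataclass

@dataclass
class Patch:
    description: str  # human-readable summary the user can judge
    diff: str         # the concrete change to the artifact's source or configuration

def propose_patch(user_request: str, artifact_source: str) -> Patch:
    """Stand-in for a call to a user-side LLM that drafts a change proposal."""
    return Patch(
        description=f"Proposed change for: {user_request!r}",
        diff="(an LLM-generated diff of artifact_source would go here)",
    )

def negotiate(user_request: str, artifact_source: str) -> str:
    """Propose a change and apply it only with the user's explicit consent."""
    patch = propose_patch(user_request, artifact_source)
    print("Assistant proposes:", patch.description)
    if input("Apply this change? [y/N] ").strip().lower() == "y":
        return artifact_source + "\n# patched:\n" + patch.diff
    return artifact_source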

In this alternative development path, I position large language models (LLMs) as user advocates, perhaps a mix of lawyers and programmers. This approach, possibly rooted in the notion of adversarial design, transforms everyday artifacts into negotiation arenas between users and designers/technologists. It disrupts the conventional power dynamics, in which technologists have more say about how things should be designed only because they know how to code. It challenges the traditional understanding of the industrial design process and advocates for a post-industrial perspective that values democratic approaches. The emphasis is on continuous contestation and the creation of spaces where political, social, and cultural concerns can be expressed and engaged within the deeply intertwined processes of design, development, and use.

[1] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

[2] Although there is no consensus about the primary reason behind most of the justice and fairness problems of digital artifacts, I want to avoid digging deeper into it in favor of any specific understanding of the issue, because that is outside the scope of this article. However, ideals similar to Sasha Costanza-Chock’s “design justice,” although they elaborately explore the notion of justice within the design discipline, are far from a tangible design direction that can be followed and, to me, seem more like an instrument for political debate than material for everyday design practice. On the contrary, I like how Daniel Schmachtenberger, on the Green Pill podcast, talks about the psycho-social externalities of technologies. His perspective seems considerate of human biological and evolutionary constraints, with a broader socio-economic focus and a pragmatic lens for improving on the status quo.
