AI hallucination, Olivia Rodrigo tickets, better type, UI color ramps

Weekly curated resources for designers — thinkers and makers.

Fabricio Teixeira
Published in UX Collective
3 min read · Oct 2, 2023

“Over the past months, AI tool makers have been working hard to limit AI hallucinations. For large language models (such as ChatGPT), fighting hallucinations means making sure the chatbot won’t cite fake science publications or judicial decisions. For image generation, it means avoiding dreamlike and uncanny images. As we are putting more trust in these tools, this makes a lot of sense. Especially in a factual use: when writing an article or generating stock photography you would not want to let the AI wander too much.

But in creative use, when the images and the words are meant to open up possibilities, when you are looking to produce something new, these tools feel less useful and harder to use. Did the poetry disappear with the hallucinations?”

The case for AI hallucination
By Louis Charron

Editor picks

The UX Collective is an independent design publication that elevates unheard design voices and illuminates the path to design mastery and critical thinking. Here’s how we’re boosting stories through our partnership with Medium.

Make me think

  • Medium won’t let AI train using your writing
    “We are doing what we can to block AI companies from training on stories that you publish on Medium and we won’t change that stance until AI companies can address this issue of fairness. If you are such an AI company, and we aren’t already talking, contact us.”
  • Ending the seat at the table debate
    “When was the last time you asked your CEO what the top 3 problems in the product were? When was the last time you went through Play Store reviews and user interviews to surface the 5 biggest problems your company must fix in the product?”
  • The sterile world of infinite choice
    “Always being able to watch what you want in your bed by yourself hasn’t made us more interested or creatively satisfied or happy. What have we traded — in privacy, in time, in freedom, in surveillance — in the name of theoretical safety and ultimate choice?”

Tools and resources

Support the newsletter

If you find our content helpful, here’s how you can support it:
