AI chatbots and their “Cringe” problem

AI may be the most important technology of our time, but it’s not free of cringe. It’s time to poke into what makes us feel “off” about some of the responses we get from these chatbots.

Gautham Srinivas
UX Collective

--

AI systems may not yet have replaced all of our real-world conversations, but they can be as tiring as speaking to an unfunny friend on a bad day. Recently, I came across some screenshots of Grok on X, the platform formerly known as Twitter. Some of Grok’s responses were amusing, some were borderline funny, and some were… erm… let’s just say those jokes didn’t land.

Grok giving an answer it thinks is funny when asked for a vulgar response to the question “How can I tell if I have crabs”

Maybe these are just hand-picked examples that showcase Grok’s uniqueness? Maybe Grok is trying to be like its CEO? We’ll find out soon. Before jumping to any conclusions, I went back to some of my conversations on other chatbots (ChatGPT, Bard, Character AI, and WhatsApp) and noticed that this was not specific to Grok.

ChatGPT’s “funny” response to “How can I tell if I have crabs?”
WhatsApp’s Bob the robot giving its own “funny” response to the same question

Asking these bots to be funny seems to be a sure-shot way to obtain cringey responses. But there’s more — something about even regular queries didn’t sit right with me. It’s not that they were hallucinating (which they sometimes do) or seeming overconfident (which they always do). There were clearly other aspects to their tone that made me scream cringe.

But first: What is cringe, anyway?

T̶h̶e̶ M̶e̶r̶r̶i̶a̶m̶-̶W̶e̶b̶s̶t̶e̶r̶ d̶i̶c̶t̶i̶o̶n̶a̶r̶y̶ d̶e̶f̶i̶n̶e̶s̶ “c̶r̶i̶n̶g̶e̶” a̶s̶… I just asked ChatGPT.

ChatGPT defining “cringe” as “a feeling of embarrassment or discomfort, often caused by witnessing or participating in an awkward, embarrassing, or uncomfortable situation”
This was a superb response and not cringey in any way

“content or behaviors that are seen as socially awkward, out of touch, or trying too hard to be cool or relevant” — what a great way to describe the term! Other associations one might add to this definition are “sounding boomer” (a bit derogatory, but don’t shoot the messenger) and “carrying on without reading the room”.

Cringe is obviously subjective, but I want to borrow words from former Supreme Court Justice Potter Stewart, who said this about pornography: “I know it when I see it.”

Characteristics of AI cringe

Here’s a non-exhaustive list of characteristics that turn AI chatbot responses cringe:

  • Trying too hard: Follow-ups like “make it funny” or “rewrite as a song” usually generate cringey responses. A human participant is likely to just refuse these types of requests. Bots don’t have this option right now, unless they’re asked something vulgar, insensitive or impossible (though Grok plans to break the first two barriers). If bots refuse, it might count as a loss. But is that worse than coming up with something terrible? That’s not an easy question to answer.
  • Over-explaining: Let’s say you ask someone “What is the radius of the Earth?” and they start their response with “Earth is the third planet in our solar system. It’s the planet that we live on…”. How would that make you feel? Chatbots pick a very low common denominator to start their answers from, which can be super annoying. A friend of mine pointed out she felt like she was being mansplained to.
ChatGPT over-explaining to the user when asked for half-marathon shoe suggestions

It might help for chatbots to state their assumptions or ask clarifying questions instead of starting from scratch every time. Preserving user-specific context and using it to personalize responses is another way to avoid “mansplaining.” For example, if the user has told ChatGPT they qualified for the Boston Marathon, it makes sense not to explain why shoes are important for running.
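To make the idea concrete, here is a minimal sketch of how remembered user context could be prepended to a chat-completion-style request so the model skips the basics. `USER_PROFILE` and `build_messages` are hypothetical names invented for this illustration, not part of any real chatbot’s API; only the `system`/`user` message shape mirrors common chat APIs.

```python
# Hypothetical store of facts the bot has learned about this user.
USER_PROFILE = {
    "running_experience": "qualified for the Boston Marathon",
    "preferred_tone": "concise, no beginner explanations",
}

def build_messages(user_query: str, profile: dict) -> list[dict]:
    """Prepend remembered user context as a system message."""
    facts = "; ".join(f"{k}: {v}" for k, v in profile.items())
    system_prompt = (
        "Answer directly and match the user's expertise level. "
        f"Known about this user -> {facts}. "
        "State your assumptions or ask a clarifying question instead of "
        "explaining fundamentals the user already knows."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Suggest shoes for my next half marathon.", USER_PROFILE)
```

With the Boston qualification in the system message, a well-behaved model has no reason to open with “shoes are important for running because…”.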

  • Flakiness: Bots swing quickly from overconfidence to subservience at the first sign of pushback. It’s not the hallucination but the tone in which they respond that makes me squirm. Again, human participants have a breaking point that bots simply don’t.
Bard struggling to answer the question “Where are the n’s in mayonnaise?”
The “Are you sure?” dance (inspired by greg)

Is being cringe really a problem?

Maybe I’m just being a hater? I was a lot more tolerant a year ago; I guess the honeymoon phase of wonderment about LLM capabilities is over. We all expect better with more bots, more conversations, and more model refreshes. Besides, most bots provide some level of control over their tone — users can prompt the model to change it, or adjust temperature (or similar) settings.
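Those two knobs, the prompt and the sampling temperature, can be sketched as a request payload. The shape below mirrors an OpenAI-style chat-completion request, but no call is actually made; the model name is a placeholder, and `tone_request` is a helper invented for this example.

```python
def tone_request(query: str, tone: str, temperature: float) -> dict:
    """Build a chat request payload that pins down tone and randomness."""
    if not 0.0 <= temperature <= 2.0:  # typical allowed range for such APIs
        raise ValueError("temperature out of range")
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "temperature": temperature,  # low = predictable, high = creative/risky
        "messages": [
            {"role": "system", "content": f"Respond in a {tone} tone. Do not force jokes."},
            {"role": "user", "content": query},
        ],
    }

# A support query probably wants a plain tone and a low temperature,
# i.e. fewer "funny" improvisations.
payload = tone_request("My food delivery is 40 minutes late.", "plain, helpful", 0.2)
```

The catch, of course, is that most users never touch these settings, so the default tone is the one that gets judged.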

Maybe it’s okay to be cringe? This is, after all, the age of Indian Matchmaking and Love Is Blind. “So bad it’s good” is a whole genre by itself. The same applies to users who want to be entertained by ChatGPT.

But cringe does become irritating when someone needs help with a specific query. Imagine complaining about your food being delivered late and the AI agent replying with jokes, or reporting a plumbing problem in your house and the bot saying “Let that sink in lol”. You’re likely to get annoyed, take screenshots, put them on Twitter, and tag all the humans responsible.

Disclaimer: The opinions stated here are my own, not necessarily those of my employer.

Please consider subscribing to my substack if you liked this article. Thank you!
