Leveraging the strengths of LLMs for creativity & thinking

Turn LLMs into your creative teammate and thinking companion.

Tony Jin
UX Collective

--

Highly capable, but only in the right scenarios

Large Language Models (LLMs) are not good at everything, but they are very good at some things. In fact, they are so much better than the existing alternatives that they are shaking up entire industries.

At their core, LLMs are autocomplete on steroids. They’ve learned from billions of pages of text written by humans, forming what Ted Chiang calls “a blurry JPEG of the Web”.

With that knowledge, they can predict statistically which words are likely to follow the ones they are given. Because of the scale of their training (they have read more than any human possibly could in a lifetime), LLMs have become so good at mimicking human language patterns that some might regard them as having developed real knowledge, logic, memory, or even consciousness.

However, LLMs alone still get math problems wrong, and struggle to provide real-time weather information.

Newly launched ChatGPT (Dec 2022) getting a simple math problem wrong, though the reasoning is correct

Things may have improved greatly since Dec 2022, especially with plugins like Wolfram Alpha that compensate for these drawbacks. However, generative AI models alone are still bad at giving predictable, deterministic results. Because they respond based on probability, their answers to the same question might vary each time you ask.

Therefore, relying on them for precise and accurate answers may not be the most efficient use of these models. It might be easier and cheaper to use an actual calculator, or to search for the result instead.

That said, like any trait, unpredictability can be turned into a strength, as long as we find the right scenarios for it.

When it comes to idea generation and creative problem solving, we can leverage LLMs’ unpredictability and other qualities as strengths, to inspire us and help us think outside the box.

Your creative teammate

LLMs usually can’t generate “the correct answer”, but they can effortlessly generate many answers. And that’s one thing we can use to our advantage.

Brainstorming ideas

As a designer, I’ve learned that one way of producing good ideas is to encourage people to diverge, and think of as many ideas as possible. Quantity is more important than quality, because once you’ve exhausted your normal ideas (which tend to come to you first), you’re forced to think out of the box for novel ones.

LLMs are excellent at generating ideas. Just ask for 10 or even 100 ways of doing something, and you’ll get them in seconds. In fact, in my previous article surveying tools that leverage Generative AI, many of them take advantage of this and generate multiple results for each prompt.

And remember, you’re not on a treasure hunt for that one perfect idea in the answer. Of the many options, 20 might be clichés and 30 might be nonsense. But that’s part of the beauty of brainstorming: as long as one idea makes you stop and think “hmm, this is interesting, I’ve never thought about this”, you might be onto something.
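
If you want to run this kind of divergent brainstorm programmatically rather than in the chat UI, a minimal sketch might look like the following. It assumes the OpenAI Python SDK and a placeholder model name (“gpt-4o-mini”); swap in whatever client and model you actually use, and note that the onboarding question is just an illustrative topic. A higher temperature nudges the model toward more varied, less obvious ideas.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Ask for a large batch of ideas in one request: quantity over quality.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    temperature=1.0,      # higher temperature tends to produce more varied ideas
    messages=[{
        "role": "user",
        "content": (
            "Give me 30 distinct ways to onboard new users to a note-taking app. "
            "Number them, keep each to one sentence, and don't repeat yourself."
        ),
    }],
)

print(response.choices[0].message.content)
```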

Guided brainstorming under constraints

However, if we use LLMs only for generating more ideas, we’re barely scratching the surface of their true potential.

By introducing just a tad more guidance in our prompts, we can unlock even more.

In his article “ChatGPT as muse, not oracle”, Geoffrey Litt shared an impressive “intellectual conversation” with ChatGPT, where ChatGPT challenged his ideas, cited related work, and inspired him to approach the topic in new lights.

He achieved this by providing an answer template for ChatGPT. For each input he gave, he asked ChatGPT to respond from 5 different perspectives:

1: Reference: mention an idea from past work and academic literature in one of your areas of expertise, which you’re reminded of by my point

2: Push back: express skepticism about part of my idea, and explain why

3: Riff: Suggest a new, specific, and interesting idea based on my idea

4: Change the topic: Ask me a question about another topic that’s relevant to our discussion

5: Ask to elaborate: Ask me to give more detail or clarify part of my point

Each turn, Geoffrey would pick one interesting direction from the five responses ChatGPT gave, and use it as the prompt to continue the conversation, resulting in the full-on intellectual exchange he documented in the article.

This prompting method is the key to the success of this “conversation”. Again, LLMs have probably internalized more knowledge than any single human being could, so in theory they should be highly capable of holding intellectual conversations. However, it’s up to us to give them enough context, teach them how to respond, give them multiple chances to tackle a problem, and cherry-pick interesting responses to dig deeper.

What Geoffrey did was almost like teaching the LLM what a productive, intellectual conversation with a person looks like. As a result, the ideas ChatGPT generated were no longer completely random. In addition to quantity, we got higher-quality responses that foster more thinking and debate.
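
As a rough sketch of how this kind of template could be wired up outside the chat UI, here is one way to do it with the OpenAI Python SDK. The system prompt below is my condensed paraphrase of Geoffrey’s template (his article has the exact wording), and the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Condensed paraphrase of Geoffrey Litt's answer template, used as a system prompt.
SYSTEM_PROMPT = (
    "You are a thoughtful intellectual sparring partner. For every point I make, "
    "respond in exactly five numbered parts: "
    "1) Reference: a related idea from past work or academic literature; "
    "2) Push back: express skepticism about part of my idea, and explain why; "
    "3) Riff: suggest a new, specific, interesting idea based on mine; "
    "4) Change the topic: ask me a question about another relevant topic; "
    "5) Ask to elaborate: ask me to clarify or expand part of my point."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def take_turn(my_point: str) -> str:
    """Send the latest point, store the reply, and return the five-part response."""
    messages.append({"role": "user", "content": my_point})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text

# Each turn: read the five responses, pick the most interesting thread,
# and feed it back in as the next point.
print(take_turn("I think LLMs are most useful as muses rather than oracles."))
```

Keeping the full history in `messages` is what lets you cherry-pick one of the five threads and feed it back in as the next user turn.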

Your omniscient thinking companion

This is an approach we can take further in many directions, to make LLMs our ultimate thinking companion.

In addition to teaching LLMs to hold an intellectual conversation, what if we could teach them to play any role?

As we know, LLMs are very good at roleplaying, and there are products out there today that let you have a call with “Elon Musk”, or chat with “Hermione Granger”. All the hype aside, the potential for tapping into anyone’s opinion whenever necessary is vast.

Devil’s Advocate

We can start by letting LLMs assume the simple role of the “devil’s advocate”: challenging our thoughts and countering our tendency to seek out and use information that confirms our pre-existing views on a topic (a.k.a. “confirmation bias”).

For example, we can ask it to imagine how something we’re doing might fail spectacularly (similar to a pre-mortem), so that we’re reminded of our blind spots and can address them early.
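
A minimal sketch of such a pre-mortem prompt, again assuming the OpenAI Python SDK and a placeholder model name (the product being launched is just an illustrative example):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Pre-mortem style devil's advocate: ask the model to imagine the failure first.
premortem_prompt = (
    "Act as a devil's advocate. Imagine it is one year from now and our plan to "
    "launch a community-curated recipe app has failed spectacularly. "
    "Write the story of how it failed, then list the five warning signs we ignored."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": premortem_prompt}],
)
print(reply.choices[0].message.content)
```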

What would Bruce Lee do?

Photo by Fervent Jan on Unsplash

In addition to letting LLMs assume a generic role, we can also let them assume the persona of a specific person, to provide different perspectives.

In fact, LLMs can be our perfect thinking companions by promoting our lateral thinking, an approach to creative problem-solving that forgoes the traditional step-by-step methods of reasoning.

One way of thinking laterally is to pick a transitional object: someone or something that embodies certain characteristics or qualities you can use as inspiration for new ideas. It doesn’t have to be related to the problem at hand. In fact, it’s better if it isn’t, because the goal is to pull yourself out of your usual mindset and approach the problem from new perspectives.

In this example, I asked ChatGPT to assume the role of Bruce Lee, to talk about LLMs as thinking companions:

ChatGPT assuming Bruce Lee’s persona to answer my question

As you can see, the response is not perfect, but the metaphors are very interesting. And that’s the point: don’t use this as a typical how-to manual for solving problems step by step. Instead, use it like a deck of Oblique Strategies cards, which tosses curveball ideas your way to ignite your thought process.

The accuracy of the response doesn’t matter (and that’s partly why LLMs are ideal for this). What matters is how it can shed light on new perspectives, and help you smash through creative deadlocks.

And if none of this is helpful, I can always let it regenerate 10 more ideas in a few seconds.
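
For reference, the prompt behind an exchange like this can be as simple as the sketch below (same assumptions as before: the OpenAI Python SDK and a placeholder model name; the question is just the one from this article):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Lateral thinking via a persona: borrow someone else's lens on your problem.
persona_prompt = (
    "Assume the persona of Bruce Lee. Using his philosophy, metaphors, and way of "
    "speaking, give me your perspective on this question: "
    "how should I use large language models as thinking companions?"
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": persona_prompt}],
)
print(reply.choices[0].message.content)
```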

Six Thinking Hats

Taking this a step further: in addition to generating many ideas from one perspective, LLMs can assume many roles, all at once.

Can LLMs be everything, everywhere, all at once? Source

We can then leverage this strength to help us with decision making, using the Six Thinking Hats approach.

This thought exercise lets you look at a problem from six different angles before forming a holistic view of how to approach it.

  • White Hat: Understand what information you need, and how to get it
  • Red Hat: Express your feelings, intuitions, instincts
  • Black Hat: Be critical, with logical reasons for your concerns
  • Yellow Hat: Look for benefits & values
  • Green Hat: Generate new ideas & alternatives
  • Blue Hat: Control and manage the thinking process

Bing’s take on how to visualize “Six Thinking Hats”. Unfortunately it’s bad at math as well.

Below is an example of how ChatGPT can help me break down a question of mine into these perspectives:

ChatGPT prompts me to think more using the “Six Thinking Hats” approach

These questions are already enough for me to start thinking. But if I want answers directly, I can let it take a stab at answering them as well.

ChatGPT analyzing the pros and cons of moving from SF to NYC using the “Six Thinking Hats” approach
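
If you want to reuse this framework as a template, a sketch of the prompt might look like this (again assuming the OpenAI Python SDK and a placeholder model name, with the same SF-to-NYC question from the example above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

SIX_HATS_PROMPT = """Apply the Six Thinking Hats framework to the question below.
For each hat, either ask me the questions I should be answering, or take a first
stab at answering them yourself:
- White Hat: what information do we need, and how do we get it?
- Red Hat: feelings, intuitions, instincts
- Black Hat: critical concerns, with logical reasons
- Yellow Hat: benefits and value
- Green Hat: new ideas and alternatives
- Blue Hat: how to manage the overall thinking process

Question: Should I move from San Francisco to New York City?"""

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": SIX_HATS_PROMPT}],
)
print(reply.choices[0].message.content)
```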

There are probably many other thought frameworks and mental models we can plug into LLMs as templates to let them inspire us, as long as we can think of them in the first place.

Your personal board of directors

If coming up with mental models is hard (in which case I’d recommend the book Superthinking), we can always go back to Bruce Lee (as mentioned above), or other people we admire as thought leaders.

Or a group of them.

I first came across the idea of building “a personal board of directors” in Jim Collins’ book Good to Great. It refers to having a group of people from diverse backgrounds and perspectives, who “embody the core values and standards you aspire to live up to”, and who can give you advice and help you make decisions when faced with dilemmas and difficult choices.

These are typically people around you, whom you know personally. With LLMs, however, we can potentially tap into the brains of great thinkers we don’t have access to on a daily basis, and get advice “from them”.

For example, here’s ChatGPT’s answer to a very important question I have, from the perspectives of Naval Ravikant, Steve Jobs, Nassim Nicholas Taleb, Yuval Noah Harari, Adam Grant, and Elon Musk.

A board answering my pressing question about the prevention of my feline companion’s vociferous morning vocalizations

Of course, I can’t imagine all these people actually discussing my cat together (though that would be fun to see). But since ChatGPT has essentially read thousands of lines of text from each of them, it has probably “internalized” their mental models, how they speak, and how they reason, and can apply that reasoning to a new topic.

Again, these suggestions are not coming from the real people behind the names, and they’re probably not accurate either. However, they prompt us to see things from different perspectives, and to make unexpected connections our individual brains can’t. An LLM can’t provide the kind of coaching and reasoning real people do, but in some way it can still serve the role of a “board of advisors”, making a diverse set of opinions more accessible to anyone who asks.
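
If you’d like to convene a board of your own, a rough sketch of the prompt might look like this (same assumptions as the earlier snippets: the OpenAI Python SDK and a placeholder model name; swap in whichever thinkers and question you prefer):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

BOARD = ["Naval Ravikant", "Steve Jobs", "Nassim Nicholas Taleb",
         "Yuval Noah Harari", "Adam Grant", "Elon Musk"]

question = "How do I stop my cat from yowling at 5am?"  # the article's example question

prompt = (
    f"Act as my personal board of advisors, made up of: {', '.join(BOARD)}. "
    "For each person, answer the question below in their voice, drawing on their "
    "known mental models and writing style, in 2-3 sentences each.\n\n"
    f"Question: {question}"
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```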

Summary

In his book Range, David Epstein talks about how taking knowledge from outside one’s field and applying its principles in a new context helps with creative problem solving, and how generalists are great at doing so because of the range of their knowledge.

To me, having LLMs at my disposal feels like having the ultimate generalist friend by my side, one who knows a bit of everything. Despite my best efforts to become a true generalist, my knowledge is limited by what I’m exposed to, what I have time to learn, what I can remember, and so on. It’s truly a gift to have an LLM as a creative collaborator and thinking companion that can brainstorm ideas, react to my ideas, and bring in thoughts, mental models, and unique perspectives both within and beyond my own knowledge.

I hope that after reading this article, you’ll feel the same, and start experimenting with other wild things we can do with them. Let me know!

Thanks for reading! So far I’ve covered how Generative AI can be designed and used to create artifacts effectively and to foster creativity (this article). Stay tuned for more articles where I explore its use in training & education, getting insights, task automation, and more! Let me know your thoughts in the comments. If you enjoyed this article, consider following me on Medium and Twitter, and connecting with me on LinkedIn!
