Building better makeware — software for people who make things

Observations and suggestions on how we can improve creation software, from an ex-Airtable user experience researcher and professional musician.

Caitlin Pequignot
UX Collective

--

This is an abstract, 3D image of a creative person trying to build something with an iPad-type device.
Image by DALL-E.

In the diverse world of “makeware” — software that people can use to build other software or creative works — much of a creation’s success rests on the shoulders of the users themselves. But there are many opportunities for product teams to encourage that success. I’m an ex-Airtable UXR, product strategist, and musician with professional and personal experience improving and using makeware tools. These are my observations on how we can improve this kind of software for everyone who dares to make something new for others.

What is makeware?

Makeware is a term I’m using to refer to products that let people make software, workflows, or creative works (tools like Airtable, Zapier, Webflow, Squarespace, Adobe, or even Garageband). If your company is building a tool that people use to make something for other people, I’m referring to that as makeware for the purposes of this discussion.

Who uses makeware?

While granularities and subgroups certainly exist depending on the use case or industry of specific makeware, we can generally group makeware users into the following cohorts of people, or “proto-personas”:

  • Creators — people who use the makeware to build or create products, tools, or creative works for themselves or others
  • End users — people who use the tools that creators make to do their work, or people who enjoy the content they create
  • Stakeholders — for makeware that produces tools or workflows, people who have some decision-making influence over the makeware and the tools it generates

For the purposes of this discussion, I want to focus on the needs of creators, since the success of makeware tools, and of the people they serve, rests most heavily on that group. Without someone building with the needs of the team in mind, the creation risks going unused and unenjoyed by the people it was made for.

What determines if people are successful with using makeware?

First, we need to define what “success” looks like for people building things with makeware. For some, it might be that the apps, websites, or tools made with the makeware get a certain amount of end user engagement. For others, it might be that a process achieves a meaningful reduction in the time or money it costs. However that success is defined, the creator has to determine what they need to make and how to use the makeware to realize their solution.

Along that journey, I have found there are three key dimensions that affect creator success.

  • The complexity of what they are making
  • The skill or familiarity that they have with what they are making
  • How usable the makeware is

Let’s model this by thinking about a successful makeware building experience as being like a road trip. The complexity of what people are making is the route for the trip. Is it a quick trip to the store (a simple project management app, one-page marketing website, or beginner Garageband song)? Or is it a multi-day trek over rough terrain (a workflow across multiple teams, an enterprise eCommerce site, or an orchestral score recording)?

This is a diagram showing two building goals. One is simple, a straight line with a car above it leading from abstract idea, to trying and building, and then to deployed solution or content, leading to the retail store Target. The other building goal is complex, with a squiggly line leading to Yosemite National Park.
May we all find the simplest road to Target. Image by Caitlin Pequignot, DALL-E.

Let’s use the complex road trip here to illustrate the most extreme example of user frustration. The skill or familiarity that users have with what they’re making, in this example, is fairly easy to map — it’s who is driving the car. An experienced driver who has taken many road trips likely won’t have much difficulty navigating the twists and turns of this complex journey — even if they might get frustrated sometimes. But a teen who has just gotten their permit — or Chevy Chase in National Lampoon’s Christmas Vacation — is likely going to be in for a rough journey, and it might take them much longer than it would take the experienced driver to get to their destination.

This is a diagram showing the journey differences between an experienced user and a less experienced user. Vin Diesel from the Fast and the Furious “drives” a car to Yosemite and has no issue. Chevy Chase from National Lampoon has a much harder time.
Good thing Chevy Chase was never chased by Vin Diesel. Image by Caitlin Pequignot, DALL-E.

But wait. Let’s not strand this inexperienced driver and put them at risk. Let’s give them a self-driving car that can make the road trip for them! In this way, we can model the usability of the makeware as the car itself. Chevy Chase operating a self-driving car might have an easier time getting to his destination. But Chevy Chase stuck with a stick shift might never leave the parking lot.

This is a diagram showing the differences between a more usable software vs. a less usable one. In one, Chevy Chase is driven to Yosemite successfully in an electric car. In the other, he is stuck in a valley of troubleshooting with a stick shift car.
Well, he got further than I would have in a stick shift. Image by Caitlin Pequignot, DALL-E.

To bring this back to what product teams can control, it’s important to remember this: we arguably have the least control over the skill or familiarity our users bring to the table. But we can make our cars — our makeware itself — more usable for them. That way, more people can be successful with our tools, regardless of their creation goal.

But what about the complexity or use case of the goal itself? Do product teams have any control over that? To some extent, yes — teams can choose to focus on user segments with more or less complicated creation goals, depending on business objectives and use case prioritizations. Maybe your business doesn’t care so much about serving the needs of racecar drivers. But if a goal of your makeware is to make creation easier for more people, the software itself can be optimized to make even ambitious goals feel less complex to achieve.

There are a multitude of ways to do this — providing users with templates, AI assistants, and workflow builders are just a few. In general, these types of interventions are referred to as “lowering the floor”, or making useful abstractions for complicated functionality, so that a deep understanding of said functionality is not needed to achieve the same result (Edwards, 2020).

The key to building makeware for the most people is not only to make it more usable, but also to make intelligent decisions with the user, not for them, to get them to their destination faster and with the least amount of frustration.

Allowing users to “tailor” software to better suit their needs — such as allowing them to abstract their goal complexity into steps that are easier to do — is a key aspect to designing applications that users can customize for their own skill level or goal (MacLean et al., 1990). Unfortunately, many makeware products are still inherently hard to use and take time to master, despite efforts to lower the floor. And many people don’t have the skill or time to invest in using the makeware we offer them.

In my time at Airtable, I worked on several projects that endeavored to make app building easier. As a musician and product strategist, I’ve used other makeware tools myself to build products and content for others to enjoy. In both the personal and professional spheres of my life, I’ve observed the following product opportunities in the makeware space that I believe are worth addressing.

How can makeware be more accessible?

Over the course of my career and personal experience, I’ve observed that allowing people to make useful mistakes, providing helpful stimuli to react to, maximizing for short yet meaningful product experiences, and leaning into rather than fighting established mental models can catalyze a more enjoyable makeware experience. By making experimentation feel cheaper, creation progress more easily recallable, and iteration more proactive, we can deliver that experience to users of all skill levels and goal complexities.

People need to make mistakes to learn

Why it’s important: Iteration is a product mindset but not one we always encourage in our users or even prioritize at work. Without it, people get stuck in the creation process and may not know how to get out.

When I was a math instructor tutoring students at Mathnasium, I didn’t worry that it took a student a few tries to use a number line correctly. With the right scaffolding, a student can use their previous mistakes to reach the right answer later. When I was working as a UXR on a growth team, I didn’t question why we built, ran, and shipped growth experiments — and then designed something else if an experiment didn’t go the way we wanted. But as a UXR studying people building things with makeware, I see that this useful loop of try, fail, learn, and repeat — the basic product iteration cycle — isn’t leveraged as well as it could be to help users learn from their inevitable mistakes.

And mistakes are inevitable, especially for more inexperienced creators. While experienced software builders might gather requirements and work with design experts, inexperienced ones may find that both their creation’s requirements and its design emerge from their explorations (Ko et al., 2011). We also see that trial and error remains a key way for people to learn how to use software — more so than reading documentation — because doing so feels like “progress” and keeps them moving toward their goal (Masson et al., 2022). So it would follow that a makeware requirement would be to support those creators’ explorations and mistake-making. Unfortunately, this is not always the case.

One way makeware supports trial and error is by allowing users to undo their mistakes, either explicitly in the UI or via an implied knowledge of CTRL/CMD+Z. It may seem obvious, but a lot can go wrong here. People can have a hard time finding or intuiting the undo affordance, reverting a specific action they took several steps ago, or knowing whether the system undid the action at all.
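As a sketch of the “clear system status” idea (the names and messages here are my own illustration, not any real product’s API), here is an undo history that returns a human-readable confirmation for every undo, so the UI never leaves the user guessing whether anything happened:

```python
# Illustrative sketch only: an undo history that always reports what it did,
# so the UI can surface an explicit "Undid: ..." status instead of silence.
class UndoHistory:
    def __init__(self):
        self._done = []  # stack of (action_name, undo_callback)

    def record(self, action_name, undo_callback):
        """Call this whenever the user performs an undoable action."""
        self._done.append((action_name, undo_callback))

    def undo(self):
        """Undo the most recent action and say so; never fail silently."""
        if not self._done:
            return "Nothing to undo"
        action_name, undo_callback = self._done.pop()
        undo_callback()
        return f"Undid: {action_name}"


# Usage: deleting a chatbot intent records how to restore it.
intents = ["Greeting", "Test intent"]
history = UndoHistory()

removed = intents.pop()  # user deletes "Test intent"
history.record(f'Delete "{removed}"', lambda: intents.append(removed))

status = history.undo()  # restores the intent and returns a status message
```

The point of the sketch is that `undo()` always returns something displayable, so even the empty-stack case produces feedback rather than a no-op.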

DialogFlow is a Google product that allows people to build traditional AI chatbot workflows. Unfortunately, it doesn’t handle undo well because it gives an inconclusive system status about whether or not the undo has actually occurred. Here I delete something — a test intent, in this case — and try to get it back, first by pressing CMD+Z off-screen, then on-screen by looking for a “trash” in the sidebar. As neither attempt to find “undo” was successful, and my test intent seemed to be gone, I concluded that DialogFlow simply doesn’t let users undo their deletion events.

A GIF of the DialogFlow UI is shown here, which shows a left sidebar of main menu options, a list of chatbot intents, and an area on the right where users can test the chatbot. Here, an intent is deleted, but the UI gives no clear answer to the question of if the action can be undone.
DialogFlow, like many builder products, doesn’t show “undo” to the extent that makes users feel comfortable trusting it. GIF by Caitlin Pequignot.

And allowing for undo is really only the bare minimum. In situations where undone actions indicate an exploration and not a simple click error, the interaction is an opportunity for learning and redirection that few products take (though detecting intent like this is certainly easier said than done).

Imagine if DialogFlow were smart enough to have figured out that my deletion of specific training phrases, based on strings or other metadata, was indicative of a common mistake. Its interpretation of my actual goal and proactive suggestion, correct or not, might lead me to a more successful action.

This is a prototype image of a potential DialogFlow redirection. It shows the DialogFlow UI as shown previously, with a modal that says “Test intent” deleted, with a clear blue Undo button. Another modal says, “If you’re making test intent changes, you can do that in the demo center.” This one has a blue button that says, “Try It”.
Note: the “demo center” of DialogFlow doesn’t exist, and neither does my skill as a designer. This is meant to demonstrate a possible redirection only. Image by Caitlin Pequignot.

Versioning and branching are more powerful evolutions of undo that enable the “what-ifs” of learning to occur. A great example of this that I used in my own initial learning of R is the data exploration, visualization, and statistical analysis tool Exploratory. A key value of Exploratory is that it allows you to go back in time to previous steps you may have applied to your data frame.

This is a GIF of Exploratory’s data analysis UI, which looks like a dashboard with many graphs visualizing the data underneath. The right sidebar shows data manipulation steps, like a filter, in the order that the user implements them. I show how a user can easily revert to a previous manipulation by clicking directly on the previous step.
Going back to a previous data frame in Exploratory is relatively easy. GIF by Caitlin Pequignot.

Exploratory’s branching allows users to try out analyses in a non-destructive way. If you wanted to go down an analysis rabbit hole that excluded “unknown” airports, for example, you could go off and do that without having to make a separate file.

This is a GIF of the Exploratory UI showing how a user can branch off a data manipulation step without destroying the underlying data.
You can branch off of your main project file in Exploratory like this. I’ve often wished that I could do this in makeware tools, especially in Airtable, Figma, or my music production tool, Ableton Live. GIF by Caitlin Pequignot.
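Under the hood, this kind of stepping and branching can be modeled as an ordered list of transformation steps. The sketch below is my own simplification, not Exploratory’s actual implementation, but it shows why reverting and branching are cheap and non-destructive: branches copy the step list, never the source data.

```python
# Illustrative model of step-based history: each branch is just a list of
# transformation steps; branching copies the steps, never the raw data.
class StepHistory:
    def __init__(self, steps=None):
        self.steps = list(steps or [])

    def add(self, step):
        """Append a transformation step (a function from rows to rows)."""
        self.steps.append(step)

    def revert_to(self, index):
        """Drop every step after `index`, like clicking an earlier step."""
        self.steps = self.steps[: index + 1]

    def branch(self, index):
        """Start a new line of exploration from an earlier step."""
        return StepHistory(self.steps[: index + 1])

    def apply(self, rows):
        """Run the steps over the raw data; the source is never mutated."""
        for step in self.steps:
            rows = step(rows)
        return rows


# Usage: exclude "unknown" airports, then branch to try a sorted view
# without touching the main history or the underlying data.
flights = [{"airport": "SFO"}, {"airport": "unknown"}, {"airport": "JFK"}]

main = StepHistory()
main.add(lambda rows: [r for r in rows if r["airport"] != "unknown"])

side = main.branch(0)
side.add(lambda rows: sorted(rows, key=lambda r: r["airport"]))
```

Because each branch only holds references to steps, the “analysis rabbit hole” costs almost nothing to open and nothing to abandon.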

The workaround for the lack of this kind of functionality is well-known: hacking your way through file duplication as a means of trying out ideas.

This is an image of music files on my computer. There are multiple files, all with slightly different names that show versions saved as separate files.
These are Ableton files — songs I’m working on. Sadly, I don’t know what all of these versions are. Generally, the longer the name, the newer the version. Foolproof heuristic. Image by Caitlin Pequignot.

Unsurprisingly, this kind of file hacking is much more difficult in cloud software, where users can’t save explicit local files, but they still find a way.

This is an image of the Airtable homepage, which shows five different bases all with slightly different names, such as Budgets 2023, Budgets V2, Budgets, etc.
Who amongst us is not guilty of something like this? Image by Caitlin Pequignot.

As anyone who has built something that other people use or consume knows, versioning and branching naturally become essential to change management as processes age, business needs change, or creative edits just keep getting more extensive. Getting people used to a forgiving iteration cycle can build trust that the makeware will continue to support them as their needs evolve. And it can make building easier and more enjoyable.

People learn better by reacting

Why it’s important: Even if a suggestion is wrong, it can be a guide that can help users unstick themselves later, or discover something new.

Starting from a blank slate might be a welcoming canvas to advanced users, but it can induce panic in people with less skill or familiarity with the thing they are making. In learning theory, the concept of scaffolding, or completing manageable steps on the journey toward a goal, is well established (Kurt, 2020). Abstractions such as templates, or pre-defined options, are useful starting points because they provide this kind of scaffolding; however, they can be too basic, limiting, or not customizable enough (Arvedsen et al., 2015).

But imagine a template or predefined option that creators could look at and tweak after giving the system a few instructions in natural language. This could be a more useful way to deliver the value of a template while letting the user tweak their end result in real time (for more on how LLMs can enable malleability of software outputs, check out Geoffrey Litt’s thoughts on this). Prompt engineering, the process of iterating toward an AI output closer to what you want, is actually a good example of how seeing an end state helps someone approach their goal. At the time of writing, however, most prompt engineering UIs don’t encourage or help the user get closer to their desired output; users must intuit the changes they want themselves.

Here is a personal example. Prior to the launch of OpenAI’s GPT Builder, I tried to get ChatGPT to talk like Felicity Merriman — an American Girl doll from the Revolutionary War. Throughout this process, I had to iterate a few times to get ChatGPT to avoid anachronisms, sound less like a helpful assistant, and consistently answer in the first person.

This is a ChatGPT conversation, the text input of which reads: pretend you are felicity merriman from the american girl company. i would like to chat with you as if we were connected by a magical device that could let us talk back in time. ChatGPT: Of course! I’d be delighted to step into the shoes of Felicity Merriman, a character from the American Girl series, and chat with you…What era or time period would you like to discuss or imagine our conversation taking place in? [Edited for length]
I gave ChatGPT too much credit, thinking it might intuit Felicity’s year from its knowledge base. I had to be more explicit and later tell it to tone down its default assistant persona. Image by Caitlin Pequignot.

I was able to come up with the edits to my prompt myself, but as the complexity of what we want out of our generative AI partners increases, so too should the scaffolding that helps us narrow in on our objectives for them.

This is another ChatGPT 3.5 image. The text is too long to reproduce here, but in general, it shows an improved version of the AI persona, talking like someone from 1774.
After getting ChatGPT to stop mentioning the century, stop sounding like an assistant, and speak in the first person like an old friend, it took on the persona of Felicity fairly well and handled my roleplaying as a woman in the late 18th century in stride. Image by Caitlin Pequignot.

As I worked on this article, OpenAI announced GPT Builder, which lets people use natural language to create their own GPT assistants. This type of iterative natural questioning leverages the kind of “yes and” iteration that I found necessary to create my Felicity Merriman GPT, and the kind of iteration creators often find necessary to make useful things with makeware. I’ve since used GPT Builder to make other GPTs for myself in this iterative way, including one that helps me find plot holes in my young adult fiction novel.

This is a screen capture from a video of Sam Altman demoing the GPT Builder at OpenAI’s demo day. He stands at a podium with a large screen next to him that shows the GPT Builder.
Sam Altman demos GPT Builder at DemoDay 2023. The kind of questioning that GPT Builder starts with provides something for the builder to react to — no matter how right or wrong. This would have helped unstick me while I noodled over how to iterate on Felicity’s assistant. Image screenshotted from this video.

We also see LLMs moving in the space of being able to construct UI that is more tailored to the user’s goal. If a makeware tool could generate websites, dashboards, or provisional song structures in the conversational, iterative way that Google Gemini is starting to do, how much easier would creation be?

This is a screenshot of Google Gemini creating a UI from natural language. It shows birthday party ideas arranged in a list view on the left with a detailed view on the right. Further on the right, a sidebar shows the code behind how Gemini generated the result.
Generation techniques such as those displayed by Google Gemini could be deployed to generate more useful starting points for creators using makeware products. Screenshot by Caitlin Pequignot of Google Gemini from this video.

I’ve observed, and also seen documented, that when people get stuck, they often turn to other resources, such as demonstrative YouTube videos, or else they give up on their goal (Masson et al., 2022). One reason is that they need to see a similar problem reflected and solved through someone else’s approach. This takes time and motivation to seek out, though, and not all creators will be willing to sink that much effort into using a makeware product. Since this kind of abstraction can be so difficult, especially for users with less skill or experience, it’s important for makeware to provide quicker feedback or proactive suggestions that people can react to or accept, rather than forcing them to come up with their end goal on their own.

People’s circumstances don’t always allow for extensive learning

Why it’s important: The asks we make of users with respect to iteration and learning need to be respectful of the time they realistically have to dedicate to learning — but this is easier said than done.

When research participants answer “I don’t have time” to a question about why they didn’t do something, I find that there is usually a more specific answer they’re not sharing.

Rather, “I don’t have time” really ends up meaning:

“This is too hard to figure out in the X minutes that I have every week to dedicate to this.”

“Our new director is trying to bring on the software they used in their last job (instead of yours), so I haven’t felt like it’s worth it to even try [to learn yours].”

Of course, some external factors aren’t in the domain of things a company can solve. But understanding the window of time that people have available to use your makeware product is important. If a creator typically only spends up to 5 minutes in your product, how can the product team make this 5-minute session the most valuable for the user and for the business? What’s a meaningful unit of work they can get done so that they leave feeling satisfied and not frustrated?

At Airtable, the new user experience team experimented with a checklist-model feature that picked up where newly signed-up users left off in their building session. The idea here was to give new users discrete steps they could take to familiarize themselves with Airtable. We made the experience easily recallable and the steps quick enough to be completed in even just one session, if they liked.

This is an image of an Airtable base with a new user experience modal drawer open on the bottom right. In the modal, there are several steps that users can take to get started with their new base, such as “Create a table” and “Set up the columns”.
Airtable’s first-time base building feature is intended as a reference to help builders along with steps in their base building process over multiple sessions. Image by Caitlin Pequignot.

Success for us meant engaging our target audience further in the building process. But the process of building websites, apps, tools, and workflows can take days, weeks, or months depending on the size of the team or the complexity of the thing being made. Other research that I led suggested that certain audiences were less motivated to sink time into the difficulties of using Airtable than others, especially when they had difficult building objectives.

A challenge for product teams is to realistically answer the question of how much effort a target user group is willing to put into learning the makeware, especially if their skill and familiarity are low. Using that effort as a “reality filter”, teams can then test solutions that can help bridge that gap.

Another challenge that time-poor creators face is the amount of troubleshooting they may have to do to get the thing they’re making that last 30, 40, or 50% of the way to the finish line. I tend to refer to this as the customization wall. Dealing with creation pitfalls, especially after an onboarding experience that makes it relatively easy to get started, can be a jarring experience for creators. Fortunately, the kind of iterative questioning and answering that generative assistants are good at makes me hopeful that they can be of great value during this troubleshooting phase of creating with makeware products.

As a real-life example of a product that’s headed in this direction, I tried out Coda’s new AI assistant. It’s a promising first step for brainstorming, especially with text as an input source. When I tried to make a demo UXR repository in it, I got excited that the AI assistant would be able to help me figure out how to link Projects to Assignees. Unfortunately, it’s not quite able to do this yet from natural language alone; or at least, I wasn’t able to get it to work the way I intended. Still, the promise of what these assistants can do to help detangle tough concepts or user asks is certainly growing, and they are poised to save users time in the long run.

This is an image of Coda.io, a knowledge management product. Here I have two tables in Coda that I’m trying to link together by asking an AI assistant in the sidebar to link them for me. It’s not able to do it yet, or at least, I wasn’t able to figure it out.
Coda is on an optimistic path to leveraging AI assistants to help with the customization wall. Image from Coda, by Caitlin Pequignot.

Product teams can address these issues by prioritizing experiences that respect the realistic time a user will spend in their product and by leveraging generative AI to help unstick creators in moments of confusion. These interventions and others can ease the time burden on creators using makeware products.

People are rooted in software and interactions they know

Why it’s important: Focusing on how your product is different might help sell your user at the “shelf decision” (the moment they decide between your product and a competitor’s), but it won’t help the people who then have to use it relate it to their past experiences.

Our brains are associative machines. A friend who used to struggle with conversations told me about a breakthrough moment when he realized that having one is “just saying something that’s related to what someone else just said”. When confronted with new software, it’s easy to say, “oh, this acts like X but also kind of acts like Y” in order to understand it. What the user believes about a system — how it will work, what it reminds them of — constitutes their mental model of it (Nielsen, 2010).

However, in their effort to stand out, products sometimes fight this comparison in favor of touting their differentiation. But that can sometimes be to the detriment of the people using it.

I experienced this firsthand as a user of the digital audio workstation (DAW) Logic Pro transitioning to Ableton Live (another DAW). One of Ableton’s differentiating features is a view that lets artists cue clips of their music in real time, allowing them to jam and explore on the fly. This is a super fun and useful way to make music, but it was tough to learn coming from a mental model where I thought about a song’s lifecycle as moving from beginning to end.

This is an image of Ableton Live’s UI, a digital audio workstation. It shows many different instrument columns with audio or MIDI clips inside them.
Ableton’s session view. Clips can be set to loop or play once, allowing the user to build a song in a non-linear way. Image by Caitlin Pequignot.

Compare that with Apple’s digital audio workstation, Logic, which only has the more traditional “timeline”-like view of making music.

This is an image of Logic Pro, Apple’s digital audio workstation. Unlike one of Ableton’s views, it shows instruments as rows, with their audio regions stretched out over time. This is the more traditional way of showing audio regions.
In contrast, Logic’s timeline view is the more conventional way of making a song in this kind of software. While Ableton also offers a view like this, its session view was one of the reasons I chose to switch music-making software. Image by Caitlin Pequignot.

In an effort to wrap my head around how Ableton worked, I kept trying to graft Logic Pro affordances onto it, struggling with Ableton’s unique and more flexible session view (cueing and playing sections of your song).

If Ableton had provided an onboarding that explained it in the context of someone coming from a more traditional digital audio workstation, I might have had an easier time migrating to it from Logic Pro. The same kind of opportunity applies to other makeware, especially those tools that look like another tool but may have some key differences to their functionality.

What can makeware products do to make creation easier, faster, and more attainable?

In the course of my career, I’ve observed that people need to make mistakes to learn, and they need options to react to that can help them learn. However, people have other priorities competing for their time, and they are rooted in the software and interaction patterns they already know.

I’d be a poor research partner if I didn’t leave this discussion with some recommendations for product teams making these kinds of tools. I believe makeware companies can help their target users by making makeware easier to use and providing building blocks that lower the perceived complexity of what they are trying to make. I believe there is a world where makeware tools are cheaper to play with and learn from, with better multi-session support to help people pick up where they left off, and a focus on providing examples and options rather than forcing people to come up with something for the sometimes abstract decisions they are trying to make.

Here are a few ideas I have that could improve the creation process with makeware tools:

  • Onboarding that uses the terminology of similar experiences, where applicable
  • Multi-session help experiences
  • In-product AI assistants that provide creation options within a conversation or UI that they generate
  • Better “undo” and “redo” system status
  • Proactive redirection or suggestions after an undo action
  • Branching so people can play with options non-destructively

By making people’s difficult building goals easier to attain, and by improving the usability of our own products, we can empower more people to be successful using our makeware. That way, more people can make websites, workflows, apps, software, and even songs, more easily.

References

Edwards, L. (2020, March 3). Floors and ceilings. Lee.af. https://lee.af/floors-and-ceilings/

MacLean, A., Carter, K., Lövstrand, L., & Moran, T. (1990). User-tailorable systems: pressing the issues with buttons. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Empowering People — CHI ’90. https://doi.org/10.1145/97243.97271

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., Scaffidi, C., Lawrance, J., Lieberman, H., Myers, B., Rosson, M. B., Rothermel, G., Shaw, M., & Wiedenbeck, S. (2011). The state of the art in end-user software engineering. ACM Computing Surveys, 43(3), 1–44. https://doi.org/10.1145/1922649.1922658

Masson, D., Vermeulen, J., Fitzmaurice, G., & Matejka, J. (2022). Supercharging Trial-and-Error for Learning Complex Software Applications. CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491102.3501895

Kurt, S. (2020, July 11). Vygotsky’s Zone of Proximal Development and Scaffolding. Educational Technology. https://educationaltechnology.net/vygotskys-zone-of-proximal-development-and-scaffolding/

Arvedsen, M., Langergaard, J., Vollstedt, J., & Obwegeser, N. (2015). Chances and limits of end-user development: A conceptual model. 223, 208–219. https://doi.org/10.1007/978-3-319-21783-3_15

Litt, G. (2023, March 25). Malleable software in the age of LLMs. geoffreylitt.com. https://www.geoffreylitt.com/2023/03/25/llm-end-user-programming

Nielsen, J. (2010, October 17). Mental Models and User Experience Design. Nielsen Norman Group. https://www.nngroup.com/articles/mental-models/


Senior UXR and product strategist, ex-Airtable. Professional violinist and short prose poem enthusiast. caitlinpequignot.com