How to improve Experience Design by managing cognitive biases

Cognitive biases are systematic errors in thinking that can lead UX professionals to make poor design decisions. Here, we will learn to identify and manage three cognitive biases to improve data-driven design outcomes and user delight.

Marina Shapira, Ph.D.
UX Collective

--

An illustration of two human heads with the words “beliefs” and “assumptions” written in them. The word “facts”, surrounded by question marks, floats between them.
Illustration by the author

Highlights:

  • Cognitive biases are subconscious errors in thinking that impact individuals and teams at all levels of seniority. If UX professionals and design leaders wish to understand their customers and make effective business decisions, they should first understand how their own cognitive biases shape customer data analysis and interpretation.
  • The Curse of Knowledge, the Halo/Horn effect, and the Dunning-Kruger effect are examples of common biases that can compromise experience and service design.
  • An overarching method to manage cognitive biases is triangulating insights obtained from a variety of research methods.
  • More specific “de-biasing” methods include the use of machine learning to analyze customer feedback, conducting contextual inquiry research (e.g., “a day in the life” study), and investing in customer onboarding.
  • Open communication about cognitive biases across ranks, along with consistent action, will help organizations optimize their decision-making and create products and services that provide value, delight customers, and increase advocacy.

Only 7% of companies are excellent at using technology and insights to inform their Customer Experience (CX) and design decisions (2). Why? One reason could be related to cognitive biases in customer data analysis and interpretation.

In people’s everyday lives, cognitive biases function as rules of thumb that help them make decisions on the fly without being paralyzed by options. However, cognitive biases can also lead to errors in judgment and decision-making, particularly in business situations that require deliberate and rational thinking (3).

“What makes the bias particularly pernicious is that we all recognize this bias in others but not in ourselves.” — Richard Thaler

Scientists have long known about the potential negative impact of cognitive biases on data collection and interpretation and accordingly have developed methods to minimize it. However, these scientific best practices are not routinely used in CX organizations or UX research teams, an oversight that exposes businesses to systematic errors in measurement, analysis, and decision-making.

Recent industry reports show that 87% of business leaders point to CX as their top growth engine in 2022 and that global investment in CX technology is expected to reach new heights of $641 billion (4). Therefore, practitioners must learn to identify and manage the impact of cognitive biases to ensure return on investment in experience measurement.

This article will cover three biases that are rarely spoken of in the industry and suggest ways to mitigate them.

What are cognitive biases, and how do they impact experience measurement?

People make 35,000 decisions a day under conditions of uncertainty, limited knowledge, and a constant bombardment of information. About 95% of these decisions are unconscious, owing to the brain’s use of mental shortcuts and rules of thumb that enable normal daily function (3). At the same time, these mental shortcuts can bias judgment and lead to irrational or wrong decisions.

For example, CX and UX teams might be impacted by confirmation bias, a tendency to see patterns in data that support previous assumptions and opinions. As a result, researchers might subconsciously collect and analyze data to affirm their prior beliefs, resulting in skewed reporting to stakeholders. By communicating prior opinions and assumptions, leaders might inadvertently put pressure on the organization to provide supportive evidence.

Another common bias in business is the sunk cost fallacy: making further investments in a failing project to justify the earlier investments, often leading to even greater losses. An example is continued investment in CX programs or design strategies that do not yield the expected results.

Learning that many of our decisions are not rational and are impacted by unconscious factors can be unnerving, especially when it comes to business decisions. The good news is that cognitive biases are predictable and systematic — they are like a clock that’s always 5 minutes late or a scale that always adds two pounds. This means that practitioners can learn to identify and mitigate common biases to inform more effective actions and optimize their decision-making.

The first bias we’ll explore is the Curse of Knowledge.

The Curse of Knowledge

“The Curse of Knowledge: when we are given knowledge, it is impossible to imagine what it’s like to LACK that knowledge.” — Chip Heath

The Curse of Knowledge is a cognitive bias that makes people assume others have the same level of knowledge as they do about a particular topic. For example, a design team might consider a customer journey to be easy and intuitive because they are deeply familiar with it. However, customers who encounter the journey for the first time might perceive it as confusing.

A famous experiment conducted at Stanford (5) demonstrated the Curse of Knowledge bias through a simple game. Participants were assigned one of two roles, “tappers” or “listeners.” Tappers were asked to tap the rhythm of a simple and familiar song (e.g., Happy Birthday), and the listeners’ task was to identify the song. Before the experiment began, researchers asked tappers how likely listeners were to recognize the song, and the average estimate was 50%. The results of the study were in sharp contrast to this estimate: listeners recognized the song only 2.5% of the time. The gap between the estimated and actual song recognition rates shows how difficult it is for people to set aside what they already know and take someone else’s perspective.

The main UX issues the Curse of Knowledge might cause are:

  • Low discoverability of important website functions or information
  • Unintuitive onboarding flows
  • A steep learning curve for a system or website’s functionality

In a business-to-business (B2B) environment, the Curse of Knowledge might be particularly problematic because professional users are often assumed to be “super users” who are more likely to troubleshoot issues independently. As a result, teams might overlook usability problems or gaps in the customer journey. In reality, B2B users seek ease of use as much as consumers do, and if they struggle with a particular system, they will abandon it and find a workaround instead of wasting time on a product or service that is hard to use.

A gap between the company’s and the customers’ perception of an experience might lead to compromised ease of use. Multiple studies have shown that the ease of use of digital products is linked to key experience metrics such as satisfaction, advocacy, and retention. Learning to mitigate the Curse of Knowledge bias using the following methods will help ensure that customers and businesses are not negatively impacted.

An example of the Curse of Knowledge in action. A customer calls a call center because they are confused by the website, and the company’s representative thinks the website is easy to use because of prior knowledge.
Illustration by the author

1) Practice perspective-taking, not just empathy

Empathy is the ability to understand and share in the feelings of others. In UX research, empathy is extended to consider customers’ needs and motivations as they relate to products or services.

Perspective-taking is a similar concept that involves “stepping into someone else’s shoes.” However, the emphasis here is on cognitive states versus emotions. Perspective-taking is about considering how others perceive the world with their unique cognitive abilities in specific contexts of use.

Perspective-taking failures are common in the daily use of products and services. For example, banks and insurance companies often ask customers to provide numbers and codes without clearly explaining what they are. My insurer’s website, for instance, requires a “plan number” and a “policy code.” By the time I figure out which is which, my session times out. Parking meters, as another example, are notorious for poorly directing customers where to input the credit card and where the ticket goes. Or, think of the many apps that require scanning a QR code, without explaining how to complete the scan. The designers of these experiences might have provided an overall solution to the customers’ needs, but they failed to consider the cognitive state of the customer in the realistic context of use.

There are several research methods that illuminate the customers’ unique perspective by evaluating cognitive states and identifying knowledge gaps in the experience.

  • For a new product or service, consider conducting a cognitive walkthrough and a usability study with a high-fidelity prototype. Recruit participants who have the same level of familiarity with the product as its intended end-users. Further, the research setting should mimic the actual conditions of use, to the extent possible.
  • For an existing product or service, measure real customers’ behavior with analytics, eye tracking, or unmoderated usability testing in the actual context of use.
  • For both new and existing products, measure task completion rate and time, System Usability Scale (SUS), and ease-of-use ratings as outcome metrics of re-designs and optimizations (a minimal SUS-scoring sketch follows this list).
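
To make the SUS scoring concrete, here is a minimal sketch in Python. The SUS formula itself is standard (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5); the participant data below is purely hypothetical.

```python
# Minimal SUS scoring sketch; the standard formula maps ten 1-5 Likert
# responses onto a 0-100 scale. Participant data here is hypothetical.

def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 Likert responses into a 0-100 SUS score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # odd items positive, even negative
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 3, 4, 2, 4, 3, 4, 3, 3, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Mean SUS: {sum(scores) / len(scores):.1f}")  # ~68 is often cited as average
```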

Companies often attempt to save money by conducting research with internal employees, tapping into the same pool of participants, or skipping essential research activities altogether. These actions might increase exposure to the Curse of Knowledge bias. Instead, it is recommended to invest in research that helps understand the cognitive states and perspectives of customers; such an investment is likely to save resources down the line and increase confidence in data-driven decision-making.

2) Invest in onboarding and learnability

The very first use of a system or service is a critical touchpoint in any journey. The quote “You never get a second chance to make a first impression” actually has a scientific basis — behavioral science research has demonstrated (6) that customers are susceptible to the Primacy effect, a tendency to retain a stronger memory of the beginning of an experience.

Industry data shows that most digital products do not make such a good first impression on customers. A recent AppsFlyer report (1) found that 1 in 2 apps are uninstalled in the first 30 days, which is associated with an average loss of $57,000 a month. Website trends are similar: on average, 47% of users abandon after viewing only one page (the rate jumps to 75% for business-to-business websites).

Reasons for low adoption can be traced back to the Curse of Knowledge bias — decision-makers often assume that customers do not need “hand-holding,” that “they will learn fast” or “it is easy enough to get started.” In reality, customers might struggle with knowledge gaps and perceive the experience as complicated or unclear.

A method to uncover exactly how much help customers need in the beginning is to measure the impact of the initial experience on long-term product adoption and retention. If these metrics underperform, there is a good chance that blind spots and knowledge gaps create friction at a critical touchpoint. After optimizations are made, teams should look for an uptick in adoption and retention, demonstrating the return on investment of reducing bias in design.
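
As a sketch of what such a measurement might look like, the snippet below compares 30-day retention for users who did and did not complete onboarding. The data file and column names (`user_id`, `signup_date`, `completed_onboarding`, `last_active_date`) are hypothetical, not from the article.

```python
# A minimal sketch, assuming a hypothetical users.csv export with columns
# user_id, signup_date, completed_onboarding, last_active_date.
import pandas as pd

users = pd.read_csv("users.csv", parse_dates=["signup_date", "last_active_date"])

# A user counts as retained if still active 30+ days after signing up.
users["retained_30d"] = (users["last_active_date"] - users["signup_date"]).dt.days >= 30

# Compare retention for users who did vs. did not finish onboarding;
# a large gap suggests friction in the initial experience.
print(users.groupby("completed_onboarding")["retained_30d"].mean())
```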

The Halo and Horn effects

“If people are failing, they look inept. If people are succeeding, they look strong and good and competent. That’s the ‘halo effect.’ Your first impression of a thing sets up your subsequent beliefs. If the company looks inept to you, you may assume everything else they do is inept.” — Daniel Kahneman

The Halo effect is a cognitive bias that occurs when a positive impression in one aspect of a product or service shapes the overall impression of it. The Horn effect is the exact opposite — a negative experience in one aspect determines an overall negative perception of the product or service.

Customer Experience data is particularly vulnerable to the impact of the Halo and Horn effects due to its reliance on attitudinal data sources assessing customers’ thoughts, feelings, and motivations. Known generally as “Voice of the Customer” (VoC) data, it includes qualitative measures such as interviews, free-text feedback, and call center transcripts.

The Halo and Horn effects might distort the interpretation of VoC data when strongly valenced comments are encountered. A comment such as “I hate your company and will never buy here again” has a strong negative valence that can lead an analyst to interpret other comments as more negative than they really are and report skewed conclusions. A very positive comment can cast a “halo” over other comments and positively bias the interpretation.

The seemingly simple solution to this problem is to ensure a large enough sample size to offset extreme comments. In practice, however, it is not that straightforward. The sample size is often determined by factors that practitioners can’t control, such as budget, time, recruiting constraints, and survey opt-in rates. Therefore, organizations wishing to de-bias their VoC data analysis should take further measures, as detailed below.

An example of an analyst being impacted by the Halo effect, thinking that one positive comment is representative of a feature’s success.
Illustration by the author

1) Create a taxonomy for qualitative data analysis

A taxonomy is an agreed-upon system to classify customer responses into categories. For example, a comment such as, “I started to use the app to better manage my household finances. I like that the app sends reminders to add my expenses, but the account overview is confusing to me” can be tagged according to predetermined categories such as customer motivation, ease of use, and pain points. The tagging could be numerical or textual, as long as it is consistent and done by trained team members.

Once the data is tagged, researchers can proceed to a thematic analysis, looking for overarching patterns. For example, tags such as “intuitive,” “confusing,” “too many steps” may form a Usability category. Based on higher-order categories, teams can create benchmarks, such as high, medium, and low usability.

When researchers are working with coded and benchmarked customer data, they are in a much better position to analyze customer feedback and report accurate, unbiased insights to stakeholders.
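
As an illustration, a first pass at taxonomy tagging can be as simple as keyword matching. The categories and keyword lists below are illustrative assumptions, not the article’s taxonomy; real taxonomies are richer and maintained by trained team members.

```python
# A minimal keyword-based tagging sketch; categories and keywords are
# illustrative assumptions, not an agreed production taxonomy.
TAXONOMY = {
    "customer_motivation": ["started to use", "manage", "track", "goal"],
    "ease_of_use": ["intuitive", "easy", "too many steps"],
    "pain_point": ["confusing", "slow", "crash", "can't find"],
}

def tag_comment(comment: str) -> list[str]:
    """Return every taxonomy category whose keywords appear in the comment."""
    text = comment.lower()
    return [cat for cat, keywords in TAXONOMY.items()
            if any(kw in text for kw in keywords)]

comment = ("I started to use the app to better manage my household finances. "
           "I like that the app sends reminders to add my expenses, "
           "but the account overview is confusing to me")
print(tag_comment(comment))  # ['customer_motivation', 'pain_point']
```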

2) Use Natural Language Processing and text analytics

Natural language processing (NLP) and text analytics refer to the use of machine learning algorithms to tag and classify customer responses. Enterprise-level “Voice of the Customer” platforms offer text analytics as part of their solutions. Smaller companies can rely on data scientists to create custom NLP algorithms for text analysis. In essence, these methods are similar to a manual tagging taxonomy, with the added benefits of automation, speed, consistency, and reliability.
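
As a sketch of how automated tagging might be bootstrapped with an off-the-shelf library (here scikit-learn, one possible choice; the tiny training set is hypothetical, and a real model would need many more hand-tagged comments from the manual taxonomy step):

```python
# A minimal text-classification sketch with scikit-learn; labels and
# training comments are hypothetical stand-ins for hand-tagged VoC data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "the dashboard is intuitive and easy to navigate",
    "setup took too many steps and kept confusing me",
    "love the reminders, very helpful",
    "I can't find the export button anywhere",
]
labels = ["usability", "usability", "praise", "pain_point"]

# TF-IDF turns free text into numeric features; the classifier then
# predicts a taxonomy tag for each new, unseen comment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)
print(model.predict(["the signup flow kept confusing me"]))
```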

3) De-bias stakeholder presentations and executive reporting

Researchers presenting customer data analyses often include verbatim customer quotes to increase empathy. Generally, it is important to socialize the customers’ voice; however, isolated quotes might trigger the Halo/Horn effect and bias the audience. To avoid this, researchers should begin presentations with the big conclusions they’ve derived from representative data. When quoting customers, they should emphasize that these are examples of overarching trends. If possible, 2–3 quotes should be presented for each topic or insight being discussed to reduce bias.
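
One way to operationalize “representative quotes” is to pick comments whose sentiment sits closest to the theme’s median rather than at the extremes. In the sketch below, the sentiment scores are hypothetical inputs (e.g., from a text-analytics step), not values from the article.

```python
# A minimal sketch: select quotes near the median sentiment of a theme,
# so extreme comments don't cast a halo or horn over the presentation.
def representative_quotes(comments: list[dict], k: int = 3) -> list[str]:
    """Return the k quotes whose sentiment is closest to the group median."""
    scores = sorted(c["sentiment"] for c in comments)
    median = scores[len(scores) // 2]
    ranked = sorted(comments, key=lambda c: abs(c["sentiment"] - median))
    return [c["text"] for c in ranked[:k]]

theme = [  # hypothetical tagged comments with sentiment in [-1, 1]
    {"text": "I hate your company and will never buy here again", "sentiment": -0.9},
    {"text": "checkout mostly works but shipping options confuse me", "sentiment": -0.2},
    {"text": "took two tries to apply my coupon", "sentiment": -0.3},
    {"text": "decent experience, a few rough edges", "sentiment": -0.1},
]
print(representative_quotes(theme, k=2))  # the extreme quote is filtered out
```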

Stakeholders, for their part, should ask questions about the global patterns stemming from the quotes and avoid over-interpreting specific comments or individual pieces of evidence.

Some “Voice of the Customer” platforms boast a feature that sends customer feedback comments directly to executives’ phones. These direct comments can help executives get closer to customers; however, when seen in isolation, they can create the wrong impression. To avoid this, executives should ask their teams whether a comment reflects an overarching pattern or just an extreme case.

The Dunning-Kruger effect

“The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” — Stephen Hawking

The Dunning-Kruger effect is a cognitive bias in which people overestimate their level of knowledge or skills in a particular domain. For example, design leaders and practitioners often overestimate their knowledge of customers’ needs and goals. This overconfidence can lead to problematic patterns such as limited empathy for customers, biasing research towards confirming preconceptions, underinvesting in research, relying on outdated research, or skipping the research phase altogether.

As a result, the Dunning-Kruger effect might lead companies to implement initiatives that miss the mark for customers and catch executives by surprise when investments do not produce the expected return. A “knowns-unknowns” workshop and a “day in the life” study are good methods to reduce the Dunning-Kruger bias.

A graph showing the progression of the Dunning-Kruger effect. It starts with people being overconfident about their knowledge, dips as they realize how little they actually know, and evens out at a medium level of confidence as knowledge grows.
Illustration by the author

1) Knowns-unknowns workshop

A knowns-unknowns workshop is an efficient way to find out what is really known about customer needs and what is merely assumed. For the best outcomes, conduct a multidisciplinary workshop including the key roles working on the project. During the workshop, work as a group to complete a 2×2 grid (see table below):

Known-knowns are evidence-based facts about customers. Each team member should write down “knowns” and refer to the evidence supporting their statement (evidence can come from market research, UX heuristics, internal UX research, or academic research).

Known-unknowns refer to questions and assumptions about customers that need to be confirmed. This category will inform most of the future research on the workshop’s topic.

Unknown-knowns are considerations such as “I think that the market research team did a survey about that, but I am not sure…”. The action item in this category is to reach out to other teams in the organization and inquire about the open questions and assumptions.

Unknown-unknowns refer to potential unexpected information the team does not have hypotheses about. The only action item here is to be humble and prepared to be surprised. Keeping an open mind will help organizations to uncover unknown-unknowns early enough in the research and planning phase, when there is still time to adapt, versus after release, when it’s harder to make big changes.

The essence of mitigating the Dunning-Kruger bias is realizing that, most likely, we know less about customers than we think. Therefore, being truly customer-centric means being humble, open-minded, and able to adjust quickly to changing demands.

A table explaining the combinations of Knowns and Unknowns in customer research.
Table created by the author

2) “A day in the life” research

The purpose of “day in the life” research is to understand the daily reality of the customers who use the product or service. The classic way to conduct this study is to shadow people in their natural environment and observe their tasks, habits, and routines to understand how the product or service can address their needs. That said, a “day in the life” study can also be conducted using online methods. Be it in person or online, the most important aspect is observing customers with an exploratory mindset, avoiding overconfidence and a confirmatory attitude.

Research triangulation: an overarching de-biasing strategy

In this article, we’ve covered three cognitive biases that can negatively impact UX measurement, decision-making, and business outcomes. However, there are multiple additional biases that might impact core processes and optimization programs. One of the best methods to hedge against all biases in data collection and interpretation is triangulation of research.

Research triangulation is the use of more than one approach to answer a question: for example, evaluating the success of a new feature using analytics data, surveys, and free-text feedback. If the results from all methods align and point to the same conclusion, the research has convergent validity, which increases confidence that the observed results describe customers’ reality rather than the researchers’ biases.
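
As a toy illustration of the convergence check, the snippet below compares the direction of change reported by three hypothetical methods and flags divergence for further investigation; the method names and verdicts are assumptions, not real results.

```python
# A minimal convergent-validity sketch; method names and verdicts are
# hypothetical stand-ins for real analytics, survey, and text-feedback results.
signals = {
    "analytics": True,       # task completion rate improved
    "survey": True,          # ease-of-use rating improved
    "text_feedback": False,  # sentiment trend did not improve
}

if all(signals.values()):
    print("All methods converge: high confidence the change succeeded.")
elif not any(signals.values()):
    print("All methods converge: high confidence the change underperformed.")
else:
    diverging = [name for name, improved in signals.items() if not improved]
    print(f"Methods diverge ({', '.join(diverging)}): investigate before "
          "concluding; the gap may itself reveal a bias.")
```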

Takeaways

  • Cognitive biases are subconscious errors in thinking that impact individuals and teams at all levels of experience and seniority. Left unmanaged, these biases can skew customer data analysis and interpretation and lead to wrong business decisions.
  • The Curse of Knowledge, the Halo/Horn effect, and the Dunning-Kruger effect might cause professionals to inadvertently compromise UX and service design.
  • An overarching method to manage cognitive biases is triangulating insights by using a variety of research methods. More specific “de-biasing” methods include the use of machine learning to analyze customer feedback, conducting contextual inquiry research (e.g., “a day in the life” study), and investing in customer onboarding.
  • Speaking openly about cognitive biases and taking consistent action to mitigate them will help prevent negative impact on UX measurement and place organizations in a better position to create products and services that provide value, delight customers, and foster advocacy.

References

  1. AppsFlyer (2020). The uninstall threat: 2020 app uninstall benchmarks. <https://www.appsflyer.com/resources/reports/app-uninstall-benchmarks/>
  2. McCormick, J. (2021). It’s Time To Get Serious About CX Data And Technology. <https://www.forrester.com/blogs/cx-technology-portfolio/>
  3. Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
  4. Morgan, B. (2021). 10 Stats Showing The Growth Of CX. <https://www.forbes.com/sites/blakemorgan/2021/08/09/10-stats-showing-the-growth-of-cx/?sh=4fd1d6fe5f23>
  5. Newton, E. L. (1990). The rocky road from actions to intentions. Doctoral dissertation, Stanford University.
  6. Van Erkel, P. F., & Thijssen, P. (2016). The first one wins: Distilling the primacy effect. Electoral Studies, 44, 245–254.

--

CX Insights lead and researcher @ Oracle. I help improve product and service design with research and behavioral science.