Price testing: a guide to inject your design perspective into the conversation

Current approaches to testing price are often not grounded in how customers think, feel, and behave – leading to false precision, pushing away your customers, and leaving revenue on the table. Enter Design.

Tom Cleary
UX Collective


‘Price Tags’ image by @angelekamp

The ability to determine the optimal price for a product or service is one of the most critical steps in any development or commercialisation process. Charge too much and you risk pushing customers away; charge too little and you’ll leave revenue on the table.

From a Design perspective, there is also something incredibly human about price. It’s that visceral reaction you feel when you’ve been overcharged for a mediocre dinner, the high expectations you place on a product because it’s expensive, or the rush of excitement you experience at a sale. However, because it elicits these feelings, there is also something inherently irrational about price which has implications for how we understand and test it. Customers will likely be forthcoming in their attitudes to price, or how it makes them feel, but they often aren’t able to describe how these factors influence their behaviour and purchase decisions.

Yet for something which is such a critical driver of new product or service success, and which operates at the core of consumer behaviour, price is often not part of the Product or Service Design conversation — instead left to the analytical folk or guardians of the business case. The consequences are significant: current approaches to understanding and testing pricing may not be executed well or grounded in how customers think, feel and behave. At best this wastes internal capability and capacity on redundant tests — at worst it leads to false precision, pushes customers away and leaves revenue on the table.

This article outlines a path to ‘better’ Design-led price testing in a field which has been dominated by Economists and Market Researchers. This approach embraces specialist skillsets to employ a combination of both experiment and survey methods based on three key factors:

  1. Product or service attributes: The nature and complexity of the product or service you are testing (e.g. B2B vs B2C; digital product vs consumer product).
  2. Innovation type and phase: The type or phase of innovation the product or service is currently at (e.g. new vs existing; research / development / commercialisation).
  3. Specialist skillsets: The skills and capability of your team to effectively execute tests and generate insights to inform pricing strategy (e.g. Growth / Market Research / Design).

Back to basics: Different approaches to testing price

There are a range of research methods you can utilise when it comes to testing price. These can primarily be distinguished by whether they are based on revealed preference (behavioural data about choices customers have actually made) or stated preference (declarative data about simulated choices customers haven’t actually made). Revealed preference approaches include market data and online/offline experiments, while stated preference approaches include customer surveys, discrete choice analysis and conjoint analysis.

Based on research conducted by C. Breidert, M. Hahsler, T. Reutterer, ‘A Review of Methods for Measuring Willingness-to-Pay’, Innovative Marketing, Vol. 1(4), Vienna (2015)

A key challenge is that the vast majority of price testing over-indexes on stated preference approaches through direct or indirect surveys as opposed to revealed preference. This is driven by a range of factors such as scale and statistical significance, cost, not having the right skillsets, ease of set-up and time — or often because we are creatures of habit and it may have been the only method we’ve utilised. The impact of this can be significant, with teams wasting time on redundant tests, being misled through false precision and pushing prospective customers away. Before we explore these limitations in more detail — let’s get under the hood of stated preference methods.

Stated preference deep-dive: Van Westendorp method

Survey approaches such as the Van Westendorp Price Sensitivity Method are commonly used to determine an ‘optimal’ price point and range for a product or service by asking the following questions:

  1. At what price would this product be so cheap that you would start to question its quality?
  2. At what price do you think this product is starting to be a bargain?
  3. At what price does this product begin to seem expensive?
  4. At what price is this product too expensive?

Results from this survey can be visualised in a graph with price on the x-axis and the cumulative percentage of respondents on the y-axis (see below). This method enables you to plot, at each price point, the cumulative share of respondents who believe the product is so cheap that they would question its quality (blue), so expensive that they would not purchase (red), a bargain (blue dotted) or generally expensive (red dotted).

Graph showing outcomes of a Van Westendorp Price Sensitivity Analysis
Source: https://medium.com/@AndrewPierno

This analysis provides three key data points to inform pricing:

  • The Point of Marginal Cheapness (intersection of ‘too cheap’ and ‘expensive’), which is the lowest price that should be charged for a product or service; anything less expensive and you’ll be over-indexing on people who don’t trust your product.
  • The Point of Marginal Expensiveness (intersection of ‘too expensive’ and ‘cheap’), which is the highest price that should be charged for a product or service; anything more expensive and you’ll be over-indexing on people who won’t pay for your product.
  • The Optimum Price Point (intersection of ‘too expensive’ and ‘too cheap’) which represents the optimal price as it results in the lowest % who would not consider the product or service (because it’s too cheap or expensive). It should be noted that this represents an optimal price solely from a demand perspective as it does not address supply or cost.
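To make the mechanics concrete, the three points above can be derived from raw survey answers by building the cumulative curves and finding where they cross. The sketch below uses synthetic responses, and the function names (`crossing`, `van_westendorp`) are my own; a real analysis would use a much larger sample and interpolate between grid prices:

```python
def crossing(prices, curve_a, curve_b):
    """Return the grid price where two curves are closest together,
    i.e. an approximation of their intersection."""
    return min(prices, key=lambda p: abs(curve_a[p] - curve_b[p]))

def van_westendorp(responses, prices):
    """Build the four cumulative curves and locate their crossings."""
    n = len(responses)
    # Share of respondents who would judge each grid price as...
    too_cheap = {p: sum(r["too_cheap"] >= p for r in responses) / n for p in prices}
    cheap = {p: sum(r["bargain"] >= p for r in responses) / n for p in prices}
    expensive = {p: sum(r["expensive"] <= p for r in responses) / n for p in prices}
    too_expensive = {p: sum(r["too_expensive"] <= p for r in responses) / n for p in prices}
    return {
        "point_of_marginal_cheapness": crossing(prices, too_cheap, expensive),
        "point_of_marginal_expensiveness": crossing(prices, too_expensive, cheap),
        "optimum_price_point": crossing(prices, too_cheap, too_expensive),
    }

# Synthetic answers to the four questions, one dict per respondent
responses = [
    {"too_cheap": 4, "bargain": 6, "expensive": 8, "too_expensive": 9},
    {"too_cheap": 5, "bargain": 7, "expensive": 10, "too_expensive": 12},
    {"too_cheap": 6, "bargain": 8, "expensive": 11, "too_expensive": 14},
    {"too_cheap": 8, "bargain": 9, "expensive": 11, "too_expensive": 13},
    {"too_cheap": 10, "bargain": 11, "expensive": 13, "too_expensive": 15},
    {"too_cheap": 6, "bargain": 8, "expensive": 12, "too_expensive": 14},
]
prices = list(range(1, 21))
print(van_westendorp(responses, prices))
```

With toy data like this the acceptable range is narrow; the value of the method comes from running it at scale with well-targeted segments.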

While survey methods like the Van Westendorp are effective at determining a price range if executed correctly (e.g. targeting the correct customer segments), there are a number of limitations from a Design perspective which may impact the reliability of results and necessitate teams augmenting this with other methods.

Let’s get real: Limitations of survey approaches

💔 No skin in the game

Both direct and indirect surveys are based on simulated choices that don’t require respondents to actually buy a product or service. Put simply, there is no commitment, obligation or skin in the game. A significant body of literature highlights how responses to these questions deviate significantly from actual willingness to pay and purchase behaviour through ‘hypothetical bias’ (see references). This bias often results in an overestimation of price and willingness to pay (Harrison and Rutström 2002).

👂🏻 The under-appreciated power of context

Surveys are not conducted in real purchasing environments and are limited in their ability to replicate contextual factors that influence the price an individual is willing to pay. Research by Boston Consulting Group (BCG) highlights the key role of the contextual factors (such as the time of day, proximity to pay-day, whether the item is a gift and who it is for, or who a person is shopping with at the time) in shaping customer needs and purchase decisions — all of which are impossible to meaningfully replicate through survey approaches. Importantly, contextual factors are often more influential than demographics and attitudes in certain categories and markets.

Data illustrating importance of context when it comes to purchasing decisions across product and service categories
Source: Boston Consulting Group, ‘Demystifying Global Choice’, (2020),<https://www.bcg.com/publications/2020/understanding-global-consumer-choice>

🛑 ‘High involvement’ products or services

Directly asking willingness to pay for high-involvement products or services, or those which are new to market, through surveys can be challenging for respondents and may impact the reliability of results. High-involvement products or services carry a higher risk to purchasers if they fail, are complex, and/or have high price tags. Given the perceived risk, these types of products and services typically have longer purchase journeys and involve more up-front research. This challenge is compounded by low purchase frequency. See below for illustrative examples:

Source: https://www.futurelearn.com/info/courses/online-business-success-profiling/0/steps/23065

Consumer electronics are a great example of high-involvement products, given they are often relatively expensive and have ambiguous features and acronyms which mean most consumers need to be hand-held through the purchase journey. Similarly, financial products such as superannuation are high-involvement as they come with financial risk and require a baseline of financial literacy that many consumers don’t have. Testing these types of products or services through a survey is not recommended, as there is no ability to support respondents with the likely questions that would inform willingness to pay.

Stated preference deep-dive: Conjoint interviews

Conjoint analysis is a complex type of market experiment which simulates a real-world purchasing situation through surveys. Respondents are presented with several scenarios featuring varying product alternatives at different price levels. During the experiment, respondents evaluate the different alternatives and related attributes, and thus indirectly reveal their preferences for, or perceived value of, attributes when choosing between the alternatives. We have had some success conducting these surveys while also integrating the approach into customer interviews — using Miro to create stimulus showcasing alternative attributes and prices. While this doesn’t have the scale of Van Westendorp and still suffers from being stated rather than revealed preference, the qualitative insights you’ll likely get from the simulation will be invaluable if executed correctly. The benefit of conducting this in an interview is that the facilitator is able to provide scenarios and additional context to support the respondent.
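For a flavour of how choices indirectly reveal attribute preferences, here is a toy, counts-based read of conjoint-style tasks. The attributes, levels and choices below are entirely hypothetical, and a real conjoint study would estimate part-worth utilities with a statistical model rather than raw counts:

```python
from collections import Counter

# Hypothetical choice tasks: each shows two profiles; `chosen` is the
# index of the option the respondent picked.
tasks = [
    {"options": [{"price": 10, "support": "email"},
                 {"price": 15, "support": "phone"}], "chosen": 1},
    {"options": [{"price": 10, "support": "phone"},
                 {"price": 15, "support": "email"}], "chosen": 0},
    {"options": [{"price": 15, "support": "phone"},
                 {"price": 10, "support": "email"}], "chosen": 0},
]

# Tally how often each attribute level was shown vs chosen.
shown, picked = Counter(), Counter()
for task in tasks:
    for i, option in enumerate(task["options"]):
        for attr, level in option.items():
            shown[(attr, level)] += 1
            if i == task["chosen"]:
                picked[(attr, level)] += 1

for key in sorted(shown):
    print(key, f"chosen in {picked[key] / shown[key]:.0%} of appearances")
```

In this toy data the respondent picks the phone-support option every time, even at the higher price — exactly the kind of trade-off signal conjoint methods are designed to surface.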

Revealed preference deep-dive: In-market experiments

As you can probably guess — I’m a strong believer that survey approaches need to be augmented with revealed preference and the use of market experiments. These are typically executed in conjunction with Growth and address key limitations of survey approaches by testing in-context at the point of sale.

Smoke tests 🚬

The term ‘smoke testing’ originates from the realm of hardware development. If you power on a circuit board for the very first time and you see smoke rising, you know it’s broken. In the world of Design and Growth, smoke testing has a similar meaning. But instead of focusing on whether the product will break or not, we want to validate a number of hypotheses before we launch a new concept, product or feature.

The most common reason to smoke test is to test the market desirability of a concept with real consumers before we’ve even designed our product or service. Most importantly, it lets us find out whether these consumers are willing to pay for it in a live market environment.

In a nutshell, smoke tests mimic the environment of a live product in the market, giving consumers the impression they can buy (or use) your product or service, typically before it’s even been built. This type of revealed preference experiment helps us pre-launch our ideas in a market to validate the launch of a Minimum Viable Product (MVP), saving us time, money and potentially a huge amount of disappointment.

To test the key price hypothesis, there are a number of smoke testing techniques we can leverage, depending on a few factors such as time, budget and resourcing constraints. For each of the smoke tests below, you need the ability to drive traffic from a market to your smoke test, from a relevant channel that allows you to target your relevant personas, archetypes or demographics.

Smoke test #1: Pricing page 💸

Designing a pricing page and testing real consumer interactions is a strong smoke testing technique. These quantitative insights enable us to validate (a) whether a potential customer is prepared to pay for our concept, and (b) which price package they find most desirable. Once a potential customer clicks on the respective plan, they’re presented with a ‘coming soon’ page where they can opt to pre-register their interest and be notified when the concept launches. This technique can work effectively and is relatively cheap and quick to get live in the market. We can also measure sign-ups (building our initial list of users), bounce rates, conversion rates, unit economics, and more.

Smoke test #2: Pre-order form 📄

Similar to the Pricing Page smoke test, but a mockup of the concept’s product(s) is showcased with a ‘Pre-Order’ form. If you are testing multiple tiers, specs or quantities, these can be included within the form to gauge the desirability of different tiers. Once the form has been submitted, explain through a pop-up or notification that the user will be notified when the product is ready (obviously, don’t take payment until then and be clear with expectations).

Smoke test #3: Fake door test 🚪

The Fake Door technique is a lower fidelity method of testing pricing. It requires creating multiple adverts to test (a) the desirability of a concept, and (b) at which price point it is most attractive, typically through digital marketing. The advert features a mock-up design of your concept, which can either redirect users to a ‘coming soon’ page where they can pre-register, or to a simple lead generation form (‘give us your email’) that sits within the advertising platform. This method enables you to test different variations of price points and packages with the same cohort of users, and gauge the desirability of each based on the conversion rates (e.g. click-through or sign-up rates) of each variation. Furthermore, you can start to build out your audience list to promote your MVP.
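To illustrate how the results of a fake door (or pricing page) test might be read, here is a rough sketch comparing the conversion rates of two price variants, with a simple two-proportion z-test to check the gap isn’t just noise. All traffic numbers below are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: same ad creative shown at two price points
variants = {"$9.99": (120, 4000), "$14.99": (80, 4100)}

for price, (conversions, impressions) in variants.items():
    print(price, f"{conversions / impressions:.2%} conversion")

# |z| > 1.96 means the difference is unlikely to be chance at the 95% level
z = two_proportion_z(120, 4000, 80, 4100)
print(f"z = {z:.2f}")
```

This is a deliberately bare-bones read; in practice you would also watch downstream signals like pre-registration quality and bounce rates before concluding one price wins.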

Towards a ‘better’ approach to price testing

The inconvenient truth is that there is no silver-bullet method to price testing. Instead, it requires cross-disciplinary skillsets and an ability to use a variety of approaches to triangulate the answer. Next time you’re faced with a question on price, ensure that you have sufficient time to take a step back and effectively evaluate the different methods based on your research objectives. If you’re finding this challenging, consider your:

  1. Product or service attributes: The nature and complexity of the product or service you are testing (e.g. B2B vs B2C; digital product vs consumer product).
  2. Innovation type/stage: The type or phase of innovation the product or service is currently at (e.g. new vs existing; research / development / commercialisation).
  3. Internal skillsets: The skills and capability of your team to effectively execute pricing tests and generate insights to inform pricing strategy (e.g. Growth / Market Research / Design).

In addition to this — consider your industry and the market you are testing in to determine the best course of action. Is your product low cost and purchased frequently? Is your market more attitudinally driven as opposed to contextual? Answers to these questions will have implications for your research approach. For example, insurance products are less influenced by contextual factors relative to consumer packaged goods, while the Chinese market is more attitudinally driven relative to the French, which is more contextual (BCG 2020). If your research stretches across markets, consider adapting your questions to each market to achieve the best results.

When using surveys bring context: Include relevant contextual information or prompts to support respondents and reduce cognitive load. For example, prompt respondents to think about the typical time of day or occasion they might make the purchase for, or let them know about the different payment options available in the simulation. Research has shown that consumers who pay by credit card are likely to have a higher willingness to pay than those who pay with cash (Prelec and Simester 2001). Reminding respondents of these options is likely to increase the reliability of results.

And finally... de-bias yourself and respondents: Debiasing refers to the use of techniques to reduce cognitive biases. In the context of price surveys, this could include letting respondents understand the consequences of their answers, urging honesty before the survey or making them aware of biases and how they may experience them. You’ll also need to consider if you are propagating biases through the survey. A common pitfall is anchoring, which refers to the tendency for respondents’ decisions to be influenced by a particular reference point or ‘anchor’. If a respondent is confronted with a price ‘anchor’ in a situation where they are uncertain about a product or service’s value, they may regard the proposed amount as conveying an approximation of its true value (Kahneman, Slovic, Tversky 1982).

To sum up:

  • Bring your Design perspective into pricing conversations. It’s likely been missing.
  • Accept that there is no silver bullet. Use the key questions in this article to select the multiple methods based on the type of product or service you are designing, where you are in your innovation journey and the skillsets you have at hand.
  • Use pricing conversations as an opportunity to experiment with revealed preference approaches and in-market experiments to de-risk your research.
  • Debias and build context into questions when utilising direct and indirect survey approaches to improve reliability.

Written with Adam Hardy 💯

References

Boston Consulting Group, ‘Demystifying Global Choice’, (2020), <https://www.bcg.com/publications/2020/understanding-global-consumer-choice>

C. Breidert, M. Hahsler, T. Reutterer, ‘A Review of Methods for Measuring Willingness-to-Pay’, Innovative Marketing, Vol. 1(4), Vienna (2015)

D. Kahneman, P. Slovic, A. Tversky, ‘Judgment under Uncertainty: Heuristics and Biases’, Cambridge University Press, New York (1982)

D. Prelec, D. Simester, ‘Always Leave Home Without It: A Further Investigation of the Credit-Card Effect on Willingness to Pay’, Marketing Letters, Vol. 12(1), (2001)

M. Le Gall-Ely, ‘Definition Measurement and Determinants of the Consumer’s Willingness to Pay: a Critical Synthesis and Directions for Further Research’, Recherche et Applications en Marketing, Vol. 24(2), Paris (2009)

R. Hofstetter, K.M. Miller, H. Krohmer, Z.J. Zhang, ‘De-Biased Direct Question Approach to Measuring Consumers’ Willingness to Pay’, International Journal of Research in Marketing, Vol. 38(1), Amsterdam (2021)

Van Westendorp Method: <https://en.wikipedia.org/wiki/Van_Westendorp's_Price_Sensitivity_Meter>

Gabor–Granger Method: <https://en.wikipedia.org/wiki/Gabor–Granger_method>

Hypothetical Bias: <https://catalogofbias.org/biases/hypothetical-bias/>
