Alternatives to Conjoint Analysis


There are several alternatives to conjoint analysis, some of which are outdated, while others serve slightly different purposes. Please feel free to contact us for help if you are evaluating the different methods.

MaxDiff

MaxDiff (Maximum Difference Scaling, BWS, or Best–Worst Scaling) is a statistical technique that helps prioritise different items, such as product features. Unlike conjoint, MaxDiff usually does not look at products as combinations of levels grouped by attribute. In MaxDiff, researchers are free to examine features in a more haphazard manner, which comes at the cost of limited analytical capabilities and limited depth of insight.

Each MaxDiff question presents a set of 2 to 5 features, and the respondent has to indicate which feature is most important or desirable, and which is least important or desirable. For example, respondents may be asked to choose the most attractive and least attractive features of smartphones:

Most attractive | Feature                                 | Least attractive
                | 6” screen                               |
                | Silver colour                           |
                | One month of free mobile data included  |
                | No physical “Home” button               |
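
To make the idea concrete, below is a minimal sketch in Python of the simplest count-based MaxDiff scoring: each feature's score is the share of times it was picked as most attractive minus the share of times it was picked as least attractive. The responses and feature names are made up for illustration, and this is not how Conjointly estimates MaxDiff scores.

    from collections import Counter

    # Hypothetical MaxDiff answers: the features shown in each question,
    # the one picked as most attractive, and the one picked as least attractive.
    responses = [
        (["6-inch screen", "Silver colour", "Free mobile data", "No Home button"],
         "Free mobile data", "Silver colour"),
        (["6-inch screen", "Silver colour", "Free mobile data", "No Home button"],
         "6-inch screen", "No Home button"),
    ]

    shown, best, worst = Counter(), Counter(), Counter()
    for features, picked_best, picked_worst in responses:
        shown.update(features)      # how often each feature was displayed
        best[picked_best] += 1      # how often it was chosen as most attractive
        worst[picked_worst] += 1    # how often it was chosen as least attractive

    for feature in shown:
        score = (best[feature] - worst[feature]) / shown[feature]
        print(f"{feature}: {score:+.2f}")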

MaxDiff and Best–Worst Scaling are considered to be slightly different techniques, with various flavours of both suited to specific research needs. MaxDiff is generally a less powerful technique than choice-based conjoint when it comes to simulations of consumer preferences. That said, you can use MaxDiff on Conjointly.

Self-explicated conjoint

First of all, “self-explicated conjoint analysis” is not conjoint analysis. It is an older technique that attempts to produce outputs similar to those of conjoint, but does so in a subpar fashion. Conjointly does not offer self-explicated conjoint.

What is the similarity with conjoint analysis?

Like normal conjoint, this technique assumes that people’s preferences for products are a sum of preferences for the different features (levels of attributes). Mathematically speaking, this model can be expressed as

$$ P_c = \sum_j {D_{jk}I_j} $$

where:

  • $P_c$ is the preference for product concept $c$,
  • $D_{jk}$ is the desirability of the applicable level $k$ of attribute $j$,
  • $I_j$ is the importance of attribute $j$.
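
For illustration with hypothetical numbers: suppose a concept contains a level of one attribute rated 80 for desirability, with attribute importance 0.6, and a level of a second attribute rated 50, with importance 0.4. Then

$$ P_c = 0.6 \times 80 + 0.4 \times 50 = 68 $$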

What is different?

Unlike in normal conjoint, where respondents see sets of carefully constructed questions about which product they would choose or buy, in self-explicated conjoint, respondents typically go through two questions:

  1. All levels of all attributes are presented to the respondent and each level is evaluated for desirability (using a Likert scale or a 0 to 100 scale). Levels are often grouped by attribute.
  2. The survey then shows each respondent the most desirable level of every attribute (as reported by the respondent in the previous question). These levels are evaluated in a constant-sum question to assign relative attribute importance scores.
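
As an illustration only, the sketch below (Python, with made-up attribute names, levels, and ratings) shows how answers to these two questions are typically combined into a preference score for a concept, following the formula above.

    # Question 1: desirability ratings (0-100) for every level, grouped by attribute.
    desirability = {
        "Screen": {"6-inch screen": 80, "5-inch screen": 40},
        "Colour": {"Silver": 55, "Black": 70},
        "Data":   {"One month free data": 90, "No free data": 10},
    }

    # Question 2: constant-sum points allocated across attributes, normalised to importances.
    points = {"Screen": 50, "Colour": 10, "Data": 40}
    total = sum(points.values())
    importance = {attribute: p / total for attribute, p in points.items()}

    # Preference for one hypothetical concept: the importance-weighted sum of desirabilities.
    concept = {"Screen": "6-inch screen", "Colour": "Black", "Data": "No free data"}
    preference = sum(importance[a] * desirability[a][level] for a, level in concept.items())
    print(f"P_c = {preference:.1f}")  # 0.5*80 + 0.1*70 + 0.4*10 = 51.0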

What is wrong with it?

This approach has been shown to be subpar. Theoretically, it is not supported by marketing theory (e.g., random utility theory). Practically speaking, studies that involve self-explicated conjoint produce unreliable numbers. The reason is that the method does not make use of three important properties of choice-based conjoint analysis:

  • Conjoint is about asking people to “consider jointly”. In self-explicated conjoint, respondents are not given a chance to make trade-offs between products; they are only asked to evaluate levels and attributes separately.
  • In self-explicated conjoint, respondents are not making choices like they would in real life. Respondents are asked to self-assess the factors that drive their decisions, yet in reality, people are often unable to articulate what drives their behaviour (they might not know themselves or might be afraid to say).
  • Self-explicated conjoint is not based on regression analysis, which makes it possible to calculate confidence intervals and run sound market share simulations (see the sketch after this list).
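
For contrast, here is a minimal sketch, in Python with simulated data, of the kind of regression model that choice-based conjoint relies on: a multinomial logit fitted to observed choices, from which approximate confidence intervals for part-worths can be derived. The part-worths and choice tasks are simulated for illustration; this is not Conjointly's estimation procedure.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_tasks, n_alts, n_features = 200, 3, 4

    # Simulated choice tasks: each alternative is described by dummy-coded features,
    # and choices are generated from assumed "true" part-worths plus Gumbel noise.
    X = rng.integers(0, 2, size=(n_tasks, n_alts, n_features)).astype(float)
    true_beta = np.array([1.0, -0.5, 0.8, -1.2])
    y = (X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))).argmax(axis=1)

    def negative_log_likelihood(beta):
        v = X @ beta                                               # utility of each alternative
        log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))  # log choice probabilities
        return -log_p[np.arange(n_tasks), y].sum()

    fit = minimize(negative_log_likelihood, np.zeros(n_features), method="BFGS")
    std_errors = np.sqrt(np.diag(fit.hess_inv))                    # approximate standard errors

    for estimate, se in zip(fit.x, std_errors):
        print(f"part-worth {estimate:+.2f}, "
              f"95% CI [{estimate - 1.96 * se:+.2f}, {estimate + 1.96 * se:+.2f}]")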

In conclusion, we recommend that choice-based conjoint be used instead of self-explicated conjoint.

Two-attribute trade-off analysis

Two-attribute trade-off analysis was an early conjoint-like research technique where respondents were shown a series of trade-off tables. Each table would contain all the possible combinations of levels for two attributes. Respondents would then need to rank each column in the table according to their preference.

For example, consider a product that has three attributes, with two levels in each:

  • Size
    • Big
    • Small
  • Colour
    • Blue
    • White
  • Price
    • $10
    • $20

Each respondent would then be shown three tables (questions), with four concepts in each:


Question 1

              | Alternative 1 | Alternative 2 | Alternative 3 | Alternative 4
Size          | Big           | Big           | Small         | Small
Colour        | Blue          | White         | Blue          | White
Rank (1 to 4) |               |               |               |

Question 2

              | Alternative 1 | Alternative 2 | Alternative 3 | Alternative 4
Size          | Big           | Big           | Small         | Small
Price         | $10           | $20           | $10           | $20
Rank (1 to 4) |               |               |               |

Question 3

              | Alternative 1 | Alternative 2 | Alternative 3 | Alternative 4
Colour        | Blue          | Blue          | White         | White
Price         | $10           | $20           | $10           | $20
Rank (1 to 4) |               |               |               |
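
For illustration, a short Python sketch of how these three tables can be generated: take every pair of attributes and cross all of their levels. The attribute dictionary simply mirrors the example above.

    from itertools import combinations, product

    attributes = {
        "Size":   ["Big", "Small"],
        "Colour": ["Blue", "White"],
        "Price":  ["$10", "$20"],
    }

    # One question per pair of attributes; one alternative per combination of their levels.
    for question, (first, second) in enumerate(combinations(attributes, 2), start=1):
        print(f"Question {question}: {first} vs {second}")
        pairs = product(attributes[first], attributes[second])
        for alternative, (level_1, level_2) in enumerate(pairs, start=1):
            print(f"  Alternative {alternative}: {first} = {level_1}, {second} = {level_2}")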

This approach places less cognitive load on participants and is simple to implement. However, it has major drawbacks, because of which it is no longer used in practice:

  • The exercise is unrealistic because real-life alternatives do not present themselves for evaluation as combinations of two attributes.
  • If there are many attributes or levels, respondents get bored and often fall into response patterns just to get through the survey quickly.

There were a number of related trade-off approaches, such as SIMALTO (Simultaneous Multi-Attribute Level Trade-Off). Choice-based conjoint has come to replace all such outdated methodologies.