Technical points on DCE with Conjointly


This note is prepared for those familiar with the specifics of discrete choice experimentation to answer key questions in detail. Please contact the team if you have any further questions about the methodology.

DCE or conjoint?

Conjointly uses discrete choice experimentation, which is sometimes referred to as choice-based conjoint. DCE is a more robust technique, consistent with random utility theory, and has been shown to simulate customers’ actual behaviour in the marketplace (Louviere, Flynn & Carson, 2010 cover this topic in detail). However, the output on relative importance of attributes and value by level is aligned with the output from conjoint analysis (partworth analysis).

Overall, conjoint analysis is a popular discrete-choice method that focuses on identifying customer utilities (a.k.a. “part-worth utilities” or “relative preferences”).

Experimental design

Conjointly uses the attributes and levels you specify to create a (fractional factorial) choice design, optimising balance, overlap, and other characteristics. Our algorithm does not specifically attempt to maximise D-efficiency, but it tends to produce D-efficient designs, typically of resolution IV or V (as such, it supports measurement of two-way interactions, even though they are not used in our modelling at this stage).
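For reference, the sketch below shows one widely used definition of D-efficiency for a coded design matrix with n rows and p estimated parameters. It is a general illustration of the metric only, not a description of Conjointly’s proprietary design algorithm, and the example design is hypothetical.

    import numpy as np

    def d_efficiency(X: np.ndarray) -> float:
        # Relative D-efficiency of an n x p coded design matrix X, using the common
        # definition 100 * |X'X|^(1/p) / n (equal to 100 for an orthogonal +/-1-coded design).
        n, p = X.shape
        return 100 * np.linalg.det(X.T @ X) ** (1 / p) / n

    # Example: a 4-run design for two two-level attributes, coded as +/-1 (fully orthogonal).
    X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
    print(round(d_efficiency(X), 2))   # 100.0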

In most cases, the number of choice sets is excessive for one respondent, so the experiment is split into multiple blocks (one block for each respondent). In effect, we do support individualised designs (i.e., every respondent has their own block). Each choice set consists of several product alternatives and, by default, one “do not buy” alternative.

It is possible to set up prohibitions (simple pairs and much more complex ones).

Brand-Specific Conjoint is effectively an alternative-specific design (with interactions-only, no-main-effect nested variables).

You can review the experimental design as part of the Excel export.

Are survey respondents required to complete all available blocks?

No, each survey respondent only needs to complete one “Block”, which is randomly assigned to them.

Minimum sample size

Conjointly automatically recommends a minimum sample size. In most cases, it is between 50 and 300 responses. In our calculations, we use a proprietary formula that takes into account the number of attributes, levels, and other experimental settings.
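Conjointly’s exact formula is proprietary. For orientation only, a widely cited rule of thumb for main-effects choice studies (often attributed to Johnson and Orme) is sketched below; the example numbers are hypothetical and the rule is not Conjointly’s calculation.

    import math

    def rule_of_thumb_sample_size(tasks: int, alternatives: int, max_levels: int) -> int:
        # Rule of thumb for main effects: n >= 500 * c / (t * a), where c is the largest
        # number of levels in any attribute, t is the number of choice tasks per respondent,
        # and a is the number of alternatives shown per task.
        return math.ceil(500 * max_levels / (tasks * alternatives))

    # Example: 10 choice sets per respondent, 3 alternatives each, largest attribute has 5 levels.
    print(rule_of_thumb_sample_size(tasks=10, alternatives=3, max_levels=5))   # 84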

What is Hierarchical Bayes?

Conjointly uses Markov Chain Monte Carlo Hierarchical Bayes (MCMC HB) estimation to calculate individual-level preference coefficients. Hierarchical Bayesian modelling is a statistical approach used in conjoint analysis to estimate parameters (partworth utilities) not for the market as a whole, but for individual respondents.

The word “hierarchical” refers to the nested structure (individuals are nested in the market). “Bayesian” refers to the Bayesian statistical paradigm, which is based on Bayes’ theorem.
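As a point of reference, the standard two-level structure behind HB multinomial logit estimation can be written as follows (a textbook formulation; Conjointly’s exact priors and estimation settings are not described here):

    \beta_i \sim \mathcal{N}(\mu, \Sigma)
    \qquad \text{(respondent } i\text{'s partworths are drawn from a market-level distribution)}

    P_i(j \mid s) = \frac{\exp(x_j^{\top}\beta_i)}{\sum_{k \in s} \exp(x_k^{\top}\beta_i)}
    \qquad \text{(multinomial logit choice of alternative } j \text{ within choice set } s\text{)}

MCMC sampling then draws from the joint posterior of \mu, \Sigma, and every \beta_i given the observed choices, which is where the individual-level coefficients come from.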

This approach has three benefits over “traditional” (market-level) models, where preferences are assumed to be the same across all respondents:

  • Individual-level coefficients help market share estimation account for heterogeneity of preferences in the market.

  • They allow for segmentation of respondents by their preferences.

  • This approach allows more parameters (attributes and levels) to be estimated with smaller amounts of data collected from each respondent.

Simpler regressions are often not suitable because of heterogeneity of preferences. For example, imagine 50% of people like feature A and 50% of people like feature B. An aggregate-level model would average across these two groups (the average preference for each feature would be close to zero) and would thus fail to capture the importance of this factor for decision-making.
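A tiny numerical illustration of this averaging problem, using hypothetical utilities:

    import numpy as np

    # Hypothetical utilities for "feature A present" among ten respondents:
    # half strongly prefer it (+1) and half strongly dislike it (-1, preferring feature B instead).
    utilities_a = np.array([+1.0] * 5 + [-1.0] * 5)

    print(utilities_a.mean())            # 0.0: an aggregate model concludes the feature has no effect
    print(np.abs(utilities_a).mean())    # 1.0: yet the feature strongly drives every individual's choice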

How can I interpret relative preferences, and how are they computed?

Conjointly uses Markov Chain Monte Carlo Hierarchical Bayes (MCMC HB) estimation to compute relative individual preferences. Hierarchical Bayesian models are frequently used in conjoint analysis to estimate parameters for relative utilities. These coefficients give an approximation of how popular an option is among respondents. Please see how to interpret part-worth utilities for more information.

Marginal willingness to pay (MWTP)

For experiments where one of the attributes is price, Conjointly estimates a separate model with price as a numerical variable. We also check whether the measure is appropriate to calculate, taking into account both the experimental set-up and the received responses (for example, limiting MWTP calculation in cases where there is non-linearity in price). Marginal willingness to pay is only an indicative number.
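For intuition, the snippet below shows a common textbook way of expressing MWTP as a ratio of a utility difference to the price coefficient. The numbers are hypothetical, and Conjointly’s own calculation and appropriateness checks are more involved.

    # Hypothetical outputs from a model with price as a numerical variable.
    price_coefficient = -0.04            # estimated utility change per $1 increase in price
    utility_basic, utility_premium = 0.2, 0.8

    # Indicative willingness to pay for upgrading from the "basic" to the "premium" level:
    mwtp = (utility_premium - utility_basic) / -price_coefficient
    print(round(mwtp, 2))                # 15.0, i.e. roughly $15; an indicative figure only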

Share of preference simulation

Share of preference simulation is performed using individual coefficients from the estimated HB multinomial logit model. Two models for calculating market shares are available:

  • Share of preference model, which is appropriate for low-risk or frequently purchased products (FMCG, software, etc.), where impulse buying behaviour plays a strong role. This model is applicable in the vast majority of applications, and low-risk is the default product type in Conjointly.

  • First choice model, which is suitable for high-risk or seldom-purchased products (education, life insurance, pension plans, etc.), where the stakes are high (for example, life-or-death choices such as chronic medication options) or the purchase will use up a large portion of the respondent’s disposable income. In high-risk simulations, people tend to select the single most preferred product even when it is only marginally more preferable than the next best product. A simple sketch of both simulation rules follows this list.
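The sketch below illustrates the two rules, assuming individual-level total utilities for each simulated product have already been computed by summing the relevant partworths; the utilities shown are hypothetical.

    import numpy as np

    # Hypothetical total utilities of three simulated products for four respondents
    # (rows: respondents, columns: products).
    U = np.array([
        [1.2, 1.0, 0.1],
        [0.3, 1.5, 0.2],
        [0.8, 0.7, 0.9],
        [2.0, 0.4, 0.5],
    ])

    # Share of preference (logit) rule: each respondent's preference is spread probabilistically.
    expU = np.exp(U)
    share_of_preference = (expU / expU.sum(axis=1, keepdims=True)).mean(axis=0)

    # First choice rule: each respondent is allocated entirely to their single most preferred product.
    first_choice = np.bincount(U.argmax(axis=1), minlength=U.shape[1]) / U.shape[0]

    print(share_of_preference.round(3))   # [0.423 0.365 0.213]
    print(first_choice.round(3))          # [0.5  0.25 0.25]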

Partworth utilities (level scores)

Conjointly estimates a Hierarchical Bayesian (HB) multinomial logit model of choice using qualified conjoint responses. The final coefficients in this model are the partworth values of each level — these values reflect how strongly each level sways the decision to choose one alternative over another. When running preference share simulations, the partworth utilities are used to compute relative preference for the examined alternatives.

Note on how we display level scores in the report: The raw conjoint utility scores (which appear in the “Individual preferences” tab of the Excel export) are post-processed before being displayed in the online report.

Can I calculate part-worth utilities manually?

You can manually obtain utility scores for experiments such as Generic Conjoint, Brand-Specific Conjoint, Claims Test, Product Variant Selector, Brand-Price Trade-Off, and MaxDiff. To compute part-worth utilities manually, please follow these instructions. In summary, you will compute attribute importance as the spread of part-worth values within an attribute: attributes with larger spreads sway the decision more and are deemed more important.

Transforming conjoint raw utility scores into displayed level scores

The following steps apply to Generic Conjoint, Brand-Specific Conjoint, Claims Test, Product Variant Selector, Brand-Price Trade-Off, and MaxDiff.

  1. Calculate the average preference across individuals for each level.

  2. Within each attribute, zero-centre the partworth utilities so that the attribute’s levels average to 0. This is done by subtracting the attribute’s average utility from each level’s value.

  3. Scale the zero-centred utilities. This is done by dividing each level’s utility by the total utility range, which is the sum across attributes of the range (maximum minus minimum) of each attribute’s average level utilities. A sketch of these steps is shown after this list.
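A minimal sketch of these three steps, using pandas and hypothetical raw utilities (the attribute and level names are made up for illustration):

    import pandas as pd

    # Hypothetical raw individual-level utilities (rows: respondents, columns: levels).
    raw = pd.DataFrame({
        "Brand: A": [0.9, 0.7], "Brand: B": [-0.1, -0.3],
        "Price: $10": [1.2, 1.0], "Price: $15": [0.4, 0.2], "Price: $20": [-1.0, -0.8],
    })
    attribute = raw.columns.str.split(":").str[0]             # attribute label for each level

    avg = raw.mean()                                          # step 1: average across individuals
    centred = avg - avg.groupby(attribute).transform("mean")  # step 2: zero-centre within each attribute
    total_range = centred.groupby(attribute).agg(lambda s: s.max() - s.min()).sum()
    displayed = centred / total_range                         # step 3: scale by the total utility range
    print(displayed.round(3))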

Check out the step-by-step guide on calculating the partworth utilities by levels.

Relative attribute importances

We can also estimate a rough measure of relative importance of each attribute in the choice decision. Specifically, we calculate attribute importance as the spread (maximum minus minimum) of partworth values within an attribute, normalised across attributes for each individual.

  1. Calculate the range of preference within each attribute for each individual. This is defined as the maximum preference value within each attribute, minus the minimum for each individual.

  2. Calculate the importance ratio of each attribute for each individual. This is the range of preference for each attribute, divided by the total sum of all range of preferences for the individual.

  3. (Brand-Specific Conjoint only): Multiply by a scalar value (the exact value will depend on the specific experiment — specifics of this calculation are proprietary).

  4. Calculate the average importance of each attribute by averaging the importance ratios across all respondents.

The resulting scores are the relative attribute importances: attributes whose partworths vary more widely sway the choice more and are deemed more important. A sketch of this calculation is shown below.
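A minimal sketch of this calculation, using pandas and the same kind of hypothetical raw utilities as above:

    import pandas as pd

    # Hypothetical raw individual-level utilities (rows: respondents, columns: levels).
    raw = pd.DataFrame({
        "Brand: A": [0.9, 0.7], "Brand: B": [-0.1, -0.3],
        "Price: $10": [1.2, 1.0], "Price: $15": [0.4, 0.2], "Price: $20": [-1.0, -0.8],
    })
    attribute = raw.columns.str.split(":").str[0]   # attribute label for each level

    # Step 1: range of preference (maximum minus minimum) within each attribute, per individual.
    ranges = raw.T.groupby(attribute).agg(lambda s: s.max() - s.min()).T
    # Step 2: importance ratio per individual (each attribute's range over that individual's total range).
    ratios = ranges.div(ranges.sum(axis=1), axis=0)
    # Step 4: average the ratios across respondents (step 3 applies to Brand-Specific Conjoint only).
    importance = ratios.mean()
    print(importance.round(3))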

Note on Brand-Specific Conjoint experiments: Be careful when looking at raw utility values in the “Individual preferences” tab of the Excel export. Attributes are repeated for each ‘brand’, so there may be several more than you defined in your experiment.

Check out the step-by-step guide on calculating the partworth utilities by attribute.

Relative performance of brands (Brand-Specific Conjoint only)

In Brand-Specific Conjoint experiments, the partworth utility calculation is the same; however, the reported relative performance of brands is calculated a bit differently. These values are based on the best performing combination of levels within each ‘brand’. Specifically, each value is the sum of the partworth utilities of the levels that comprise that brand’s best offering.
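A minimal sketch of this “best offering” summation, using hypothetical partworths for one brand’s alternative-specific attributes:

    import pandas as pd

    # Hypothetical average partworths for the levels shown for one brand.
    brand_a = pd.Series({
        "Size: Small": -0.2, "Size: Large": 0.4,
        "Price: $10": 0.9, "Price: $15": -0.1,
    })
    attribute = brand_a.index.str.split(":").str[0]

    # Best performing combination: take the best level of each attribute and sum the partworths.
    performance = brand_a.groupby(attribute).max().sum()
    print(round(performance, 3))   # 0.4 + 0.9 = 1.3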

Importantly, the relative performance of a brand will be affected by the features shown for that brand, especially if one of the brands was shown with unusual or unrealistic features or price levels.