Five Questions to Determine if MaxDiff is Right for Your Study
As researchers, we often face the challenge of determining how best to prioritize, optimize, or bundle features. With so many factors to consider, from both the feature design and survey administration perspectives, choosing the right analytical technique can be daunting. In fact, guidance on which technique to use is one of the most common requests we receive from our clients.
This is the first in a series of articles from MDC Research designed to demystify these analytical choices. Whether you’re a seasoned researcher looking for additional professional insights on techniques you’re already using, or a novice when it comes to advanced analytics, we trust you will find value in these easy-to-digest summaries.
What is MaxDiff?
Maximum difference scaling (more commonly known as MaxDiff) is a trade-off exercise that, when used appropriately, is a powerful tool for feature/attribute prioritization. From the survey administration perspective, the questions are straightforward and intuitive: respondents simply choose the best and worst options from a list of 4-6 items (typically on a web survey screen). This task is repeated across several screens until all attributes have been measured. Respondents do not need to rank-order large sets, use rating scales (which can be tedious and are vulnerable to scale usage and cultural biases), or remember features or choices from previous screens.
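To make the screen-generation mechanics concrete, here is a minimal Python sketch of one naive way to assemble MaxDiff screens. The attribute names, the five-items-per-screen setting, and the rule that each item appears exactly three times are illustrative assumptions; production studies use balanced, near-orthogonal designs generated by specialized software rather than simple shuffling.

```python
import random

def build_maxdiff_screens(items, items_per_screen=5, appearances_per_item=3, seed=42):
    """Assemble MaxDiff screens by shuffling a pool in which each item
    appears `appearances_per_item` times. This only sketches the task
    structure; real designs balance item co-occurrence carefully."""
    rng = random.Random(seed)
    pool = list(items) * appearances_per_item
    rng.shuffle(pool)
    screens = []
    while pool:
        screen = []
        for item in list(pool):           # iterate over a copy while removing
            if item not in screen:        # no duplicates on one screen
                screen.append(item)
                pool.remove(item)
            if len(screen) == items_per_screen:
                break
        screens.append(screen)
    return screens

# Hypothetical auto insurance features, for illustration only
features = ["Roadside assistance", "Rental reimbursement", "Accident forgiveness",
            "Glass coverage", "Gap coverage", "New car replacement",
            "Pet injury coverage", "Rideshare coverage", "Custom parts",
            "Vanishing deductible"]

for i, screen in enumerate(build_maxdiff_screens(features), start=1):
    print(f"Screen {i}: {screen}")
```

With ten items appearing three times each across five-item screens, the exercise takes six screens, and on each one the respondent simply marks a best and a worst option.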
From an analysis perspective, the derived MaxDiff “importance” scores are far better differentiated than traditional ratings (how many times have you seen all features with an average rating of ~8.5 on a 1-10 scale?), and the forced differentiation can support additional analyses such as TURF or segmentation.
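As a rough illustration of how those differentiated scores come about, the simplest scoring approach just counts how often each item is picked best versus worst, relative to how often it is shown. The response data below is fabricated, and real analyses typically estimate utilities with a logit or hierarchical Bayes model rather than raw counts, but the counting version shows why MaxDiff scores spread out in a way ratings rarely do.

```python
from collections import Counter

def counting_scores(responses):
    """Naive best-minus-worst scores: (times best - times worst) / times shown.
    Real MaxDiff analyses usually fit logit/HB models; counts are a rough proxy."""
    best, worst, shown = Counter(), Counter(), Counter()
    for screen_items, best_pick, worst_pick in responses:
        shown.update(screen_items)
        best[best_pick] += 1
        worst[worst_pick] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Fabricated responses: (items shown, item picked best, item picked worst)
responses = [
    (["Roadside", "Glass", "Rental", "Gap"], "Roadside", "Gap"),
    (["Roadside", "Rental", "Pet injury", "Glass"], "Roadside", "Pet injury"),
    (["Gap", "Glass", "Pet injury", "Rental"], "Glass", "Pet injury"),
]

for item, score in sorted(counting_scores(responses).items(), key=lambda kv: -kv[1]):
    print(f"{item:12s} {score:+.2f}")
```

Even with three screens of made-up data, the scores range from +1.00 down to -1.00, a spread that forced best/worst choices produce naturally and that rating scales tend to compress.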
How do I know if MaxDiff is right for my study?
Answer these five questions to find out.
Do you have a manageable list of attributes?
MaxDiff works best with no more than 12-15 total attributes. While it is possible to test more than 15, larger lists invite respondent fatigue and random selections rather than carefully considered best and worst choices. Too many attributes can also result in utility scores with limited differentiation. If you’re looking to test more than 15 attributes, consider narrowing down the list internally before designing the survey. If it is absolutely critical to evaluate a larger set, try breaking it into two MaxDiff exercises, or even creating two separate surveys.
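To see why the 12-15 guideline matters, note that survey length grows directly with the attribute list. Under the common rule of thumb that each item should appear about three times across the exercise, a quick calculation shows how fast the screen count climbs; this is a sketch, and the three-appearance target and five items per screen are typical conventions rather than fixed rules.

```python
import math

def screens_needed(n_items, items_per_screen=5, appearances=3):
    # Each of n_items must appear `appearances` times; each screen holds
    # `items_per_screen` slots, so round the slot total up to whole screens.
    return math.ceil(n_items * appearances / items_per_screen)

for n in (10, 15, 20, 30):
    print(f"{n} attributes -> {screens_needed(n)} screens")
```

Going from 15 to 30 attributes doubles the task from 9 to 18 screens under these assumptions, which is exactly where fatigue and random clicking tend to set in.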
Are your attributes independent of one another?
If attribute A is required in order for attribute B to exist, then MaxDiff is not appropriate. For instance, if you’re comparing features of an auto insurance policy, it would not be meaningful to trade off liability coverage against towing coverage, because you cannot add towing coverage without the baseline liability policy. Because every attribute is evaluated against all others, each must be able to stand alone for the analysis to work as intended.
Are you testing attributes without evaluating tiers or levels?
Because MaxDiff treats each feature as independent, this analysis does not allow for evaluation of tiers or levels within a feature. If different feature levels or tiers must be incorporated (e.g., basic, upgraded, and premium levels of auto infotainment packages), consider a conjoint or discrete choice design, which provides utility scores both for each attribute and for the levels within it. We’ll cover these techniques in more detail in a future article.
Are all features on a level playing field?
In some cases, attributes that are basic and required (“must haves”) will not perform as well in a MaxDiff exercise as “sexier” attributes that tend to excite respondents. For example, a steering wheel might end up with a lower MaxDiff score than a turbocharger, but that doesn’t mean the steering wheel should be dropped from the car design.
Is pricing external to the analysis of the attribute set?
While the concept of price can be compared with other attributes in MaxDiff (e.g., price vs. quality in the buying decision), there are better analytical techniques for evaluating actual price points. When the goal of the prioritization exercise is to evaluate feature preferences at various price points, MaxDiff should not be used.
If you can say “yes” to all of the questions above, MaxDiff is likely the right trade-off analysis for your project. Whether you’re looking for help designing the full scope of a MaxDiff study or just want someone to “check your work,” we’re always available to assist.
Jakob Lahmers — Vice President, MDC Research
MDC RESEARCH – 8959 SW Barbur Blvd., Suite 204, Portland, OR 97219 – (800) 344-8725
Learn more at: www.mdcresearch.com
Copyright 2017, MDC Research