Understanding and Maximizing Your Use of Kano Analysis
The most common trade-off analyses, MaxDiff and conjoint, are excellent techniques for quantifying feature preferences, but don’t always tell the whole story. Features have nuances, and often the calculated utility score for the most critical features does not align perfectly with their true importance.
This is the second in a series of feature analysis articles from MDC Research, examining how the Kano method can be used to understand the nuances of features that go beyond utility scores. Whether you’re a seasoned researcher looking for additional professional insights on techniques you’re already using, or a novice when it comes to advanced analytics, we trust you will find some value in these easy-to-digest summaries.
What is Kano?
Rather than a rank ordering of features, the Kano method uses a series of questions to classify features into categories representing their impact on perceptions of a product or service. Understanding the nature of how features are perceived identifies which are required, which are differentiators, which are performance (more is better), and which features result in indifference from the target audience.
Why use Kano?
Let’s look at a real-world example: testing the features of a new car. In this scenario, we’d start by determining what we need from the analysis. If we’re planning to include all tested features in the final offering, a MaxDiff might be used to help us decide which features are most marketable. If we’re looking to offer various combinations of features and prices (e.g., standard vs. “fully loaded” models), a conjoint would be useful in helping us package these offerings. Neither of these approaches, however, will tell us what happens if a feature is not present in the final product, or which ones might differentiate us from competitors. So, if we need to make decisions on which features we should or shouldn’t include, the Kano method might be the way to go.
Kano ultimately categorizes features into four key categories:
- Attractive features (often called delighters or differentiators), or those which evoke feelings of delight when present in a product.
- Performance features, or those which are tied linearly to satisfaction (the more that is provided, the more satisfied we become).
- Must-Have features, or those which are expected in the product. When making decisions about which features to include in a product, having an understanding of what falls in the must-have category is key: while the feature’s presence will not necessarily increase satisfaction, its absence can substantially decrease this measure.
- Indifferent features, or those which do not impact consumers’ perceptions.
Using the car example, Kano would ultimately tell us that blind spot monitoring might be considered an attractive feature, gas mileage would be a performance (more is better) feature, climate control would be a must-have, and consumers might be indifferent towards having a sunroof.
So, what does this mean? It’s fairly intuitive that we’d want to invest in, and market, our state-of-the-art blind spot monitoring system, and continue to push for higher gas mileage in our vehicles. We can include sunroofs, but might make the decision not to, so we can invest more in our attractive and performance features (as the sunroof’s presence or absence won’t impact consumer perceptions).
But what about the climate control system? Obviously, we wouldn’t need an analysis plan to tell us that consumers expect to be able to set a comfortable temperature in their car, but there are many instances where features are less cut and dried. If we’re not sure whether a feature is “table stakes,” we might overlook it in a more traditional prioritization approach like MaxDiff. While, as researchers, we want to believe that the most critical features would float to the top of any tradeoff exercise, we see time and time again that this is not the case. Human behavior is pretty predictable, and the truth is that, while required, must-have features tend to be less exciting, and may be overlooked for something more innovative or flashy. A survey respondent looking for the latest in-car navigation technology might choose this feature repeatedly over climate control when making MaxDiff tradeoffs, as they’re already assuming climate control would be present.
How are Kano classifications determined for a set of features?
Kano is a simple technique in terms of survey design and programming, as well as administration. Because the questions are simple, do not require a visual representation, and don’t involve choosing from a long series of features, the question series can be administered through desktop and mobile surveys, and even by telephone.
Kano is administered using a set of two questions for each feature. Respondents are asked how they’d feel if the feature were present in the product, and how they’d feel if it were absent (both times choosing from a small set of response options). After asking the question set, Kano classifications are calculated at the respondent level, and plotted using a simple quadrant chart.
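The pairing described above is typically resolved with the standard Kano evaluation table: the answer to the “feature present” (functional) question and the answer to the “feature absent” (dysfunctional) question jointly map to a category. Here is a minimal sketch of that respondent-level lookup in Python. The response labels and the table itself follow the commonly published Kano convention (including the Reverse and Questionable codes used for contradictory answers), not necessarily the exact wording MDC Research uses in its surveys.

```python
# Response options for both the functional ("How would you feel if the
# feature were present?") and dysfunctional ("...if it were absent?")
# questions. Labels are illustrative assumptions.
OPTIONS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: functional answer. Columns: dysfunctional answer.
# A = Attractive, P = Performance, M = Must-have, I = Indifferent,
# R = Reverse, Q = Questionable (contradictory answer pair).
TABLE = [
    # like  must-be neutral live-w  dislike
    ["Q",   "A",    "A",    "A",    "P"],   # like
    ["R",   "I",    "I",    "I",    "M"],   # must-be
    ["R",   "I",    "I",    "I",    "M"],   # neutral
    ["R",   "I",    "I",    "I",    "M"],   # live-with
    ["R",   "R",    "R",    "R",    "Q"],   # dislike
]

def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category code for one respondent/feature pair."""
    i = OPTIONS.index(functional)
    j = OPTIONS.index(dysfunctional)
    return TABLE[i][j]

# Example: a respondent likes having the feature and dislikes its
# absence -- a Performance (more is better) feature.
print(classify("like", "dislike"))  # P
```

Once each respondent/feature pair has a code, the per-feature category is usually taken as the modal classification, which is what gets plotted on the quadrant chart.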
How can I get the most out of Kano?
One of the reservations we hear from our clients is that, while effective in conveying the dimensions and nuances of each feature, the more traditional “importance” rating is lost. One option to mitigate this is to calculate derived importance as the sum of the two aggregate measures (satisfaction if present + dissatisfaction if absent). This measure is closely correlated with self-reported importance ratings (our research has repeatedly shown correlations ranging between 85% and 95%). Using this derived measure as a proxy for importance allows survey design and collection to be more streamlined and efficient, potentially leading to shorter questionnaires, reduced risk of respondent fatigue, and higher quality data.
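The two aggregate measures can be computed directly from the respondent-level category codes. The sketch below uses the widely cited “Better/Worse” coefficient formulation (satisfaction if present, dissatisfaction if absent) and sums them as the derived-importance proxy; the exact formula behind the article’s correlation figures is not spelled out, so treat this as one common way to operationalize it. Reverse and Questionable codes are excluded from the denominator, as is conventional.

```python
from collections import Counter

def derived_importance(codes):
    """Aggregate respondent-level Kano codes (A/P/M/I/R/Q) for one
    feature into:
      better  -- satisfaction coefficient if the feature is present
      worse   -- dissatisfaction coefficient if it is absent
      imp     -- their sum, a proxy for stated importance
    """
    c = Counter(codes)
    denom = c["A"] + c["P"] + c["M"] + c["I"]  # R and Q are excluded
    better = (c["A"] + c["P"]) / denom
    worse = (c["P"] + c["M"]) / denom
    return better, worse, better + worse

# Hypothetical codes for one feature across eight respondents.
codes = ["A", "A", "P", "M", "M", "I", "P", "A"]
better, worse, imp = derived_importance(codes)
print(better, worse, imp)  # 0.625 0.5 1.125
```

Features with a high sum are important however that importance manifests (delight when present, pain when absent), which is what makes the combined measure a workable stand-in for a separate importance battery.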
Another option is to use a split sample to pipe respondents proportionally through either a Kano or MaxDiff exercise, for a fully comprehensive view of feature perceptions and importance. While requiring an adequate sample size to feed both approaches, this methodology is highly effective in telling the full story of a feature set.
If you’d like to learn more about Kano, or determine whether Kano is right for your upcoming study, please reach out to discuss in more detail.
Jakob Lahmers — Vice President, MDC Research
MDC RESEARCH – 8959 SW Barbur Blvd., Suite 204 Portland, OR 97219 – (800) 344-8725
Learn more at: www.mdcresearch.com Copyright 2018, MDC Research