Measuring the Importance of Customer Satisfaction Attributes

A comprehensive satisfaction survey analysis will include a method for helping management establish priorities. One common approach is a Quadrant Analysis, which plots satisfaction levels (or gaps) against importance levels for satisfaction attributes. Another approach is to develop indexes that combine importance and satisfaction levels. In each case, there is an underlying assumption that given two dimensions of quality with equal levels of satisfaction, managers should focus on the one that is “most important.”
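A Quadrant Analysis of the kind described above amounts to classifying each attribute by whether its satisfaction and importance scores fall above or below chosen midpoints. A minimal sketch follows; the attribute names, ratings, cutoffs, and quadrant labels are all hypothetical:

```python
# Hypothetical attributes: name -> (mean satisfaction, derived importance).
attributes = {
    "speed of service": (6.2, 1.5),
    "staff courtesy":   (8.1, 1.2),
    "billing accuracy": (6.0, 0.4),
    "store hours":      (8.5, 0.3),
}
sat_cut, imp_cut = 7.0, 1.0  # midpoints chosen for illustration

def quadrant(sat, imp):
    """Assign an attribute to one of four illustrative quadrants."""
    if imp >= imp_cut:
        return "priority fix" if sat < sat_cut else "maintain"
    return "low priority" if sat < sat_cut else "possible overkill"

for name, (sat, imp) in attributes.items():
    print(f"{name}: {quadrant(sat, imp)}")
```

The decision rule embodies the assumption noted above: of two attributes with equal satisfaction, the one with higher importance gets higher priority.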

The use of such methods raises some fundamental questions about measuring importance: What is the best way to quantify importance? How much value does management derive from relying on importance to set priorities? Researchers frequently rely on "derived" measures of importance, which are based on some measure of statistical association between a satisfaction dimension and a summary measure, such as overall satisfaction with a service. For example, a regression analysis might reveal that, on average, a one-point improvement in the rating of "speed of service" translates into a 1.5-point improvement in the rating of "overall performance." Derived measures have benefits and drawbacks compared to asking about importance directly in a survey, but in our experience the main reason for their widespread use is questionnaire length: asking for an importance rating on every attribute of quality measured in a survey usually doubles the length of the questionnaire, adding to cost and respondent burden.
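The derived approach can be illustrated with an ordinary least-squares regression on simulated data. All numbers here are invented: "true" weights are planted so that the recovered coefficients can be checked against them.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ratings (1-10 scale) for two attributes,
# e.g. "speed of service" and "staff courtesy", 200 respondents.
attrs = rng.integers(1, 11, size=(200, 2)).astype(float)
# Simulate overall satisfaction driven more strongly by the first
# attribute (planted weight 1.5) than the second (0.5), plus noise.
overall = 1.5 * attrs[:, 0] + 0.5 * attrs[:, 1] + rng.normal(0, 1, 200)

# Derived importance: the slopes from regressing overall on the attributes.
X = np.column_stack([np.ones(len(attrs)), attrs])  # add an intercept column
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)
derived_importance = coefs[1:]
print(derived_importance)  # close to the planted weights [1.5, 0.5]
```

The first slope corresponds to the "one-point improvement in speed of service yields a 1.5-point improvement in overall performance" reading described above.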

We sense that there has recently been much interest in, and concern over, how importance is measured: several clients have asked us to conduct studies validating various methods for quantifying it. In these studies, we have measured importance in at least two ways: (1) deriving ratings statistically, and (2) administering a comprehensive survey that asks for an importance rating on every attribute (a "self-explicated" measure). In our analysis, we create indexes that use the importance measures in different ways to approximate satisfaction. For example, one index may simply sum all satisfaction ratings without using importance at all; another might weight each performance rating by its self-explicated importance (i.e., multiplying the performance rating by the importance rating stated in the survey); and another might weight each performance rating by its derived importance. The assumption is that the more valid importance measure will better predict and explain overall satisfaction, loyalty, customer retention intent, and so on.
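The three-index comparison described above can be sketched on simulated data. Everything here is hypothetical: stated importance is generated independently of the outcome, and each index is scored by its correlation with overall satisfaction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 300, 4
perf = rng.integers(1, 11, size=(n, k)).astype(float)    # performance ratings (1-10)
stated = rng.integers(1, 11, size=(n, k)).astype(float)  # stated importance ratings
true_w = np.array([2.0, 1.0, 1.0, 0.5])                  # planted "true" weights
overall = perf @ true_w + rng.normal(0, 2, n)            # simulated overall satisfaction

# Index 1: simple sum of satisfaction ratings (no importance at all).
idx_sum = perf.sum(axis=1)
# Index 2: self-explicated weighting (performance x stated importance).
idx_stated = (perf * stated).sum(axis=1)
# Index 3: derived weighting (regression slopes used as weights).
X = np.column_stack([np.ones(n), perf])
w, *_ = np.linalg.lstsq(X, overall, rcond=None)
idx_derived = perf @ w[1:]

for name, idx in [("sum", idx_sum), ("stated", idx_stated), ("derived", idx_derived)]:
    print(f"{name}: r = {np.corrcoef(idx, overall)[0, 1]:.3f}")
```

On real survey data the three correlations are often close, which is the pattern behind the findings reported below.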

We believe that the results of such an analysis will vary by the service and the specific survey approach, but we have uncovered some interesting results in our analyses. One important finding is that statistically derived measures and stated importance measures do not appear to differ much in their ability to predict satisfaction. The implication is that managers and researchers need not be overly concerned if importance is not measured directly in their satisfaction survey (although such information has other uses, such as segmenting customers on the basis of needs).

A more startling finding is that including importance information does not always improve the ability to explain satisfaction; in other words, simply adding up the ratings in a survey can be just as predictive of satisfaction as weighting the ratings by importance. What does this finding mean for researching and reporting customer satisfaction? One possible implication is to plot a variable other than importance when setting priorities. One idea is to plot satisfaction against the "ease/cost of incremental improvement" rather than importance: given equal satisfaction levels on two dimensions, the greatest priority might be given to the one that consumes fewer resources to address.
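The cost-based priority rule suggested above reduces to a simple sort. The dimensions and cost figures below are invented purely for illustration:

```python
# Hypothetical dimensions with equal satisfaction but different assumed
# costs of raising the average rating by one point.
dims = [
    ("speed of service", 6.0, 250_000),
    ("billing accuracy", 6.0, 40_000),
]
# With satisfaction equal, prioritize the cheaper improvement first.
ranked = sorted(dims, key=lambda d: d[2])
print([name for name, _, _ in ranked])  # billing accuracy ranks first
```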

Our analyses suggest that a more sophisticated, non-linear approach to combining satisfaction and importance measures can yield a model with greater ability to explain satisfaction. However, such models do not map onto the simple approach managers use in Quadrant Analysis (and related methods) to make decisions. In developing a more valid model for explaining satisfaction using importance ratings, it is essential that a different analysis and decision-making approach also be adopted. One example is to rely on simulations, which produce estimates of the improvement in satisfaction levels based on assumed changes in service dimensions.
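A what-if simulation of the kind mentioned above can be sketched minimally under an assumed linear model; the coefficients echo the hypothetical "speed of service" example earlier and are not real estimates:

```python
# Hypothetical derived coefficients from an assumed linear satisfaction model.
derived_coef = {"speed of service": 1.5, "staff courtesy": 0.5}

def simulate_gain(improvements):
    """Estimated change in overall satisfaction for assumed rating changes."""
    return sum(derived_coef[attr] * delta for attr, delta in improvements.items())

# What if speed of service improves by 1 point and courtesy by 0.5 points?
print(simulate_gain({"speed of service": 1.0, "staff courtesy": 0.5}))  # 1.75
```

A real simulator would replace the linear lookup with the fitted non-linear model, but the decision-making use is the same: feed in assumed improvements, read off the estimated satisfaction gain.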

For more information, contact Gina Woodall, President, at 703-757-5213 ext. 11 or gwoodall@rockresearch.com, or Charles Colby, Chief Methodologist and Founder, at 703-757-5213 ext. 12 or ccolby@rockresearch.com.

