Management Knows Best: Or Do They? Outcomes from the Real World

Introduction:
Surveys have been around since at least the beginning of recorded history. Today many organizations use surveys to
assess job satisfaction, reaction to a planned change in a product or service,
and customer satisfaction, to name a few applications. While there have been advances in both the
design/administration and analysis/interpretation of surveys, too many are
haphazardly done and yield little more than ‘interesting’ outcomes. Debacles like the 1936 Literary Digest survey forecasting an Alf Landon landslide over Franklin Roosevelt, Gallup’s 1948 prediction that Thomas Dewey would easily beat Harry Truman, and Ford’s decision, based on extensive consumer surveys, to build the Edsel have done little to deter poorly designed and executed
surveys. In some regards, surveys suffer the same fate as interviews: most people think they can do them without ever having any training, and they have high confidence that their outcomes are meaningful. Encouragingly, there is an increase in professionally done, psychometrically sound surveys that result in reliable, actionable data. These are characterized by:

• properly drawn samples,
• effective items with realistic response alternatives,
• pilot testing to ensure accurate end-user interpretation,
• outcomes linked to other variables, such as financials.

In job satisfaction surveys, a common
analysis has been to compare the management and non-management samples. Over the years these studies have repeatedly indicated that management tends to be more satisfied than non-management on most job-related issues. The ‘scientific’ reaction to this outcome for most practitioners is a “no duh.” But I digress; that is not the focus of this article.

Rather than job satisfaction, this article looks at customer satisfaction and how well organizations predict it, with a twist. Specifically, we wanted to compare how well management could anticipate their customers’ level of satisfaction versus what non-management thought it would be.

Background:

Typically
when we do customer satisfaction surveys we conduct a parallel internal survey
designed to measure what the client
anticipates the customer will say.
This allows us to evaluate not only the key message from the customer,
but also how well the client knows the customer. We take this same approach with our 360° feedback tools as well. It quickly minimizes the “I knew that” reaction to feedback data and helps focus attention not only on the message but on the significant gaps where the message was truly not anticipated. Further, obtaining an internal measure of what the client anticipates the respondent will say avoids the common response of devaluing the data because it is ‘not accurate’ or ‘cannot be right because here are the facts...’

The real value of any survey is gaining insight into someone else’s perceptions. Knowing the “facts” without understanding an individual’s perceptions and what drives them will normally not produce the outcomes you are striving for.
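To make the approach concrete, here is a minimal sketch of the anticipated-versus-actual comparison in Python. The item names, scores, and the five-point flagging threshold are invented for illustration; they are not the client’s data.

```python
# Minimal sketch: compare what the client anticipated with what customers
# actually said, item by item. All values here are invented for illustration.

# Mean item scores (e.g., percent favorable) from the two parallel surveys.
actual = {"responsiveness": 78.0, "quality": 84.0, "value": 66.0}
anticipated = {"responsiveness": 70.0, "quality": 83.0, "value": 75.0}

GAP_THRESHOLD = 5.0  # flag items missed by five or more percentage points

for item, actual_score in actual.items():
    gap = anticipated[item] - actual_score  # + overestimated, - underestimated
    flag = "GAP" if abs(gap) >= GAP_THRESHOLD else "ok"
    print(f"{item:15s} actual={actual_score:5.1f} "
          f"anticipated={anticipated[item]:5.1f} gap={gap:+5.1f} {flag}")
```

The flagged items are the ones worth discussing with the client: they mark where the message was truly not anticipated.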
Real World Outcomes:

We recently completed an annual customer satisfaction survey program for an international client. The survey was administered primarily via the web, with responses collected electronically and hosted on one of our secure servers.
[Small digression: This was a change for this client. Previously we had done their surveys using traditional hard-copy media and first-class mail or airmail. We found return rates were slightly better for the electronic version than the paper version, but in both media the return rates consistently ran at twice the industry average. We attribute that to careful piloting and our administration process. No significant differences were found in item response characteristics when comparing paper to electronic for matched samples. This is important to check any time you use two different media; in some applications we have found consistent differences that preclude combining the data without appropriate adjustment.]
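Extending the digression: a medium-equivalence check of this kind can be sketched roughly as follows. This assumes matched (paired) samples and applies a paired t-test per item; the item names and ratings are simulated, not the client’s data, and numpy/scipy are assumed available.

```python
# Rough sketch of a medium-equivalence check for matched samples: test each
# item's paper vs. electronic responses for a systematic shift. Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
items = ["overall", "delivery", "support"]

for item in items:
    paper = rng.integers(1, 6, size=40)  # 1-5 ratings from a matched sample
    # Same respondents on the other medium (simulated small perturbation).
    electronic = np.clip(paper + rng.integers(-1, 2, size=40), 1, 5)
    t, p = stats.ttest_rel(paper, electronic)  # paired t-test per item
    verdict = "differs" if p < 0.05 else "no significant difference"
    print(f"{item:10s} t={t:+.2f} p={p:.3f} -> {verdict}")
```

In practice, items that show a consistent shift between media should not be pooled without appropriate adjustment.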
As with prior surveys for this client, the instrument was available in half a dozen languages in parallel forms, with content adjusted for cultural differences to ensure close, if not identical, interpretation by end-users.

Overall results documented
significant differences in response patterns based on the location of the
respondent, specific customer group, and product line, as they had in the past. Looking at the big picture, both management’s and non-management’s estimates were highly correlated with their customers’ responses (.810 and .910, respectively, significant at the .01 level). That is, this client continued their
trend of knowing the direction of their customers’ satisfaction across the
broad spectrum of customer satisfaction components measured. However, there were numerous unexpected differences between the management and non-management data. Considering all items on the
survey, management underestimated customer satisfaction by about 10% while
non-management overestimated customer satisfaction by about 3%. This difference becomes more dramatic when
you look at individual items.
Management’s estimate of their customers’ satisfaction was five or more
percentage points lower than the actual customer rating on over 50% of the
survey items, compared to only 15% of the items for non-management. Conversely, management overestimated what their customers actually said by five or more percentage points on 21% of the items, compared to 38% for non-management. The arithmetic behind these comparisons is sketched below.
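For the interested reader, here is a rough sketch of how such summary figures can be computed. The data below are simulated to mimic the pattern we observed (management running low, non-management slightly high); they are not the client’s actual responses.

```python
# Sketch of the headline computations: correlate each group's item-level
# estimates with actual customer scores, then tally items missed by five or
# more percentage points in either direction. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_items = 60
actual = rng.uniform(50, 95, size=n_items)         # customer score per item
mgmt = actual - 8 + rng.normal(0, 4, n_items)      # tends to run low
non_mgmt = actual + 2 + rng.normal(0, 4, n_items)  # tends to run slightly high

for name, est in [("management", mgmt), ("non-management", non_mgmt)]:
    r, p = stats.pearsonr(est, actual)             # direction-of-satisfaction link
    under = np.mean(est <= actual - 5) * 100       # % of items 5+ points low
    over = np.mean(est >= actual + 5) * 100        # % of items 5+ points high
    print(f"{name:15s} r={r:.3f} (p={p:.4f})  "
          f"low by 5+: {under:.0f}%  high by 5+: {over:.0f}%")
```

Note that a high correlation and a large systematic offset can coexist: a group can track the direction of customer satisfaction well while still consistently missing its level.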
There are plausible explanations for this pattern, but we felt there was a more problematic issue raised by the data. Since
management was reasonably consistent in their more negative perception of their
customers’ level of satisfaction, this permeated the dialog and interaction
they had with non-management, for example in department meetings, setting
targets, etc. At the same time
non-management felt that management was unrealistic or out of touch with the
customer. This resulted in an
underlying friction. Similarly, management’s perceptions affected their negotiating style in meetings with customers: they tended to be less aggressive because they believed the customers were less satisfied than they actually were.

This client has a history of
taking the survey outcomes seriously and setting action plans in place to
achieve improvements. For example, data
on technological competence came in lower than they expected in one of our earlier
surveys. They took specific steps based
on the feedback and increased their customers’ perceptions of their
technological capabilities by more than twenty percentage points in less than
two years. Similarly, they used one customer’s positive outcomes to go back to that customer and increase the amount of work being sourced to them. They are taking a
similar focus with the outcomes of this year’s survey. Also, management is now working on ways to
maintain a more accurate picture of their customers’ level of satisfaction (in
addition to the annual survey). We are
in the process of conducting and analyzing new global customer satisfaction
surveys to see if this management versus non-management difference is replicated.