Thursday, April 12, 2007

Altered Survey Controversy in Austin

It's not often that I have the opportunity to discuss a scandal in the customer satisfaction survey business, where the primary controversies surround such thrilling issues as whether 7-point Likert scales can be treated as interval-level data for the purposes of running parametric tests. The local newspaper reports that the head of the Austin Convention Center Department was fired last week for falsifying the results of a customer service survey; apparently the department had an annual bonus structure that used the results of the survey as a measure of performance. That's a bizarre use of a c-sat survey anyway, especially one that is administered and analyzed in-house; I wonder who put that mechanism in place. I was very surprised to read that the bonus money totaled over $400,000 for about 200 employees. Where would the money have gone if they hadn't gotten their bonus? The survey itself looks fairly mediocre (though not criminally so); I'm particularly fond of the declarative sentence of a question that reads, with great vagueness and passive-voice construction:

12. Your comments are valued. (Use reverse side for additional comments if needed.)

But fear not: there is, so far as I can tell, absolutely nothing riding on the customer service survey that I was putting together today for my agency. In the fall, we will send in our report to the state and somebody will put a big check mark next to the task "Agency submitted customer service survey results [of questionable validity and little significance]" and that will be the end of it.

In my previous life, I had a client company (Who Will Not Be Named) that used the weighted results of their multi-country, cross-platform awareness and purchase surveys to allocate yearly bonuses among their various regional directors and product managers. I still cannot believe that I did not murder on the spot the programmer who, one year, came to me after I had calculated and sent the final spreadsheet of results to the client and said that, oops, he had kind of screwed up some of the labeling of the European countries.

2 comments:

Tam said...

Speaking of customer satisfaction, one of my professors (yes, that one) was griping to me the other day about our school administration. Despite the fact that a lot is known empirically about student satisfaction surveys, not only are their known "issues" ignored by the administration, but the surveys are not even treated as measures of student satisfaction - they are apparently treated as direct measures of teaching effectiveness.

You can see how this might make a professor want to tear his eyes out. As he pointed out, if the administration goes hardcore on this, professors do know how to push those numbers in a positive direction...just not necessarily by improving the quality of the education they provide. Duh.

I gave the example of my Philosophy professor, who was an incredibly gifted stand-up-comedian-style lecturer, taught a pretty easy class, and was good-looking on top of all that. I know (because I look up past professors) that he gets great student survey numbers. I also know he was very sloppy about the actual information content of the course.

Sally said...

Yes, that must be a very frustrating situation for him. But I feel confident that he will not react to this perverse incentive and give everyone in your class an A.

It's a funny thing - in many cases, you can make a much stronger claim that customer satisfaction is what you (or your organization/company) are trying to create, but it is a particularly poor metric in the case of teaching.