Saturday, April 12, 2008

Or Why Psychologists Should Take More Math

I've been putting off writing about this because there is so much reading up I could do before commenting. But since I am unlikely to have a lot more time to do that any time soon, I figured I might as well get started.

Via Marginal Revolution, I read this interesting NY Times story in which an economist argues that an entire line of psychological research into (post-choice) cognitive dissonance is compromised by a methodological error. (Note: you really do need to read the article for any of the rest of this to make sense. But it's written for the layman, is pretty short, and has drawings of M&Ms in it. Check it out.)

The abstract to the working paper (it has not yet been peer reviewed & published) reads:

"Cognitive dissonance is one of the most influential theories in social psychology, and its oldest experiential realization is choice-induced dissonance. Since 1956, dissonance theorists have claimed that people rationalize past choices by devaluing rejected alternatives and upgrading chosen ones, an effect known as the spreading of preferences.

Here, I show that every study which has tested this suffers from a fundamental methodological flaw. Specifically, these studies (and the free-choice methodology they employ) implicitly assume that before choices are made, a subject's preferences can be measured perfectly, i.e. with infinite precision, and under-appreciate that a subject's choices reflect their preferences. [In other words, ignoring the economist's beloved "revealed preferences" demonstrated by the person's choice and over-trusting the ratings that people give. -sally]

Because of this, existing methods will mistakenly identify cognitive dissonance when there is none. This problem survives all controls present in the literature, including control groups, high and low dissonance conditions, and comparisons of dissonance across cultures or affirmation levels.

The bias this problem produces can be fixed, and correctly interpreted, several prominent studies actually reject the presence of choice-induced dissonance in their subjects. This suggests that mere choice may not be enough to induce rationalization, a reversal that may significantly change the way we think about cognitive dissonance as a whole."

Bottom line: I believe that Chen's math is correct. I think he was clever to see the connection to the Monty Hall problem. I think it's great that he and a psychologist colleague are testing a new free choice methodology that is intended to correct for the measurement problem and will be interested to see what results from it.

However, when he says that control groups do nothing to mitigate the measurement problem, I'm not sure that I agree. It's not obvious to me that findings of differential post-decision "spread" across various treatment and control groups can be explained away with his analysis.

It's one thing to say that, on the probabilities alone, you would expect 2/3 of subjects to choose green M&M's over blue, so that getting that result is no evidence of dissonance. The argument is that the experimenter's baseline expectation, that only 50% of subjects should choose green, was mathematically invalid.
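Chen's point can be illustrated with a toy simulation. This is my own sketch, not his model: the Gaussian preferences, the noise level, and the tie cutoff are all made-up assumptions. The key mechanics are his, though: ratings are noisy measurements of true preferences, and the choice follows the true preference. Even though nobody's preferences ever change, the pairs that looked like ties at first rating show a positive "spread" when re-rated, because the chosen item really was the better-liked one all along.

```python
import random

def simulated_spread(n_subjects=50_000, noise=1.0):
    """Average re-rating gap (chosen minus rejected) among initial ties,
    in a world with NO attitude change at all."""
    gaps, kept = 0.0, 0
    for _ in range(n_subjects):
        # Fixed true preferences for items A and B (hypothetical scale)
        true_a = random.gauss(0, 1)
        true_b = random.gauss(0, 1)
        # First ratings are noisy measurements of those preferences
        rate_a1 = true_a + random.gauss(0, noise)
        rate_b1 = true_b + random.gauss(0, noise)
        if abs(rate_a1 - rate_b1) > 0.1:   # keep only apparent ties
            continue
        # The choice reveals the true preference, not the noisy rating
        chosen, rejected = (true_a, true_b) if true_a > true_b else (true_b, true_a)
        # Second ratings: same true values, fresh noise
        rate_c2 = chosen + random.gauss(0, noise)
        rate_r2 = rejected + random.gauss(0, noise)
        gaps += rate_c2 - rate_r2
        kept += 1
    return gaps / kept

print(simulated_spread())  # clearly positive, despite zero dissonance
```

The "spread" appears purely from regression to the mean: conditioning on a noisy tie and then splitting by the (true) choice guarantees the chosen item re-rates higher on average.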

But when you put people in different experimental situations, some of which are expected (based on dissonance theory) to cause a greater amount of change in pre- and post-decision ratings than others, and you find those differences between groups who were treated differently, it seems like there is something else going on. Shouldn't this measurement problem affect all groups equally?

Of course, it's possible that he does not intend his criticism to extend to those kinds of experiments, just as he does not criticize the many other experimental paradigms used to study dissonance effects. To the degree this is the case, the criticism is fairly trivial.

I also would just like to say that I really like the Monty Hall problem. Part of this, of course, is that I get to feel smart because my intuition steers me right in choosing the high-probability door. My first reaction to the problem was: "Choose the other door. But why? Well, because I was probably wrong when I picked this door. My choice probably forced him to pick the door he did, because he has to pick a door without the car behind it. Thus the car is probably behind the door he didn't pick." I was only later able to work out the 1/3, 2/3 math.

But I wonder if many people do not immediately understand the significance of the fact that Monty knows where the car is and has to act accordingly. It seems like they treat Monty's choice of door as though it were as random as their own. My hypothesis: if you varied the emphasis placed on Monty's knowledge of the car's location in your description of the problem, people in the high-emphasis condition would be more likely to get the problem right than those in a low-emphasis condition. But I would not be surprised if many people were not helped by that information. It's a confusing enough problem that apparently many people adamantly maintain that the probabilities remain 1/2, 1/2 despite all explanations, and do not even trust the simulations designed to show that it works out empirically.
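For the holdouts, here is the kind of thing those simulators do, sketched in Python (trial count and door setup are arbitrary choices of mine). Note where Monty's knowledge enters: he can never open the car door.

```python
import random

def monty_hall(switch, trials=100_000):
    """Play the Monty Hall game many times; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Monty opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # ~2/3
print(monty_hall(switch=False))  # ~1/3
```

Staying wins only when the first pick was right (1/3 of the time); switching wins whenever it was wrong (2/3), exactly because Monty's constrained choice leaks information.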

2 comments:

Anonymous said...

So, my take on this is that it is not necessarily true that cognitive dissonance is the reason for the results of such testing. It can be somewhat determined by mathematics, which muddies the waters.

Sally said...

Mom, right - that's the impression I get.