Sunday, March 3, 2013

The Creep of Subjectivity

When I worked at a market research company, our watchwords were accuracy and trendability (ensuring that results gathered at different time points were comparable).  Companies paid us to give them timely, real information about beliefs, attitudes, and behaviors toward their brands and competing brands.  Although I felt all kinds of pressure from clients, some of which could compromise the quality of the information we provided (the main one being that they wanted their information to be both accurate and fast), I never felt any pressure/incentive (internally or externally) to slant research results in any way.  Clients were not paying us to be their cheerleaders; we existed to tell them how things really were (from the consumer perspective).  In the case of my primary client, the news was bad and got worse over time.  I well remember finishing an analysis and seeing that consumers would be willing to pay more money for a no-name product than for my client's product, an indication that their brand equity had turned negative.  I felt a bit of "man, sucks to be you guys!" but no qualms at all about reporting these findings to the client.  To this day, I feel good about contributing in my small way to the company's realization that they could not continue what they were doing and their consequently making radical changes that turned their company around.

Working in government was more of a mixed bag.  For the majority of projects, accuracy and rigor were called for and respected, but there were occasions where this did not hold.  One time, we were forced by the legislature to collect and report data that was just stupid, and the director of the division called me into her office to apologize personally for having to tell me to do it -- she said that even she knew that the project was entirely bogus and worthless as research and that it must feel like a violation of my professional ethics to even do it.  (Surely the quality of research in government was notably reduced on the day that politicians became aware of the existence of the free, online SurveyMonkey software.) 

Most notably, the last big project I worked on (and that was in part responsible for my being tired of the job and leaving the organization) was an evaluation of a new outreach program for kids (I can't believe I haven't blogged about this before but it seems I haven't).  A new program that my agency really, really wanted to work.  In retrospect, I know how I should have handled this situation: by saying, "I absolutely agree that evaluation of the project is a critical component.  It's my strong recommendation that we put together a request for proposal to find an outside company with significant evaluation experience to spearhead this evaluation and ensure that it is accurate, fair, and unbiased."  We even had enough funding from a grant that we could have paid for good professional help on this matter (not typically the case for us).  Instead, we did the evaluation ourselves, and when the project turned out to be a clusterfuck, shit rained down on the evaluators (i.e., me) rather than the people who did not even carry out the project in the way they were supposed to.  You can't evaluate something unless you actually do it first, you know.  In retrospect, my evaluation would have read:

"Process Evaluation:  The [name] Project was not implemented according to the [whatever] guidelines agreed to by all parties on [date].  [With details of where the implementation failed and original guiding documents attached.]

Outcome Evaluation:  Because the [name] Project was not implemented in accordance with its guidelines, we cannot measure the impact and effectiveness of the program as it was designed and intended."

(Remember this example of bad, wish-driven research in that field that found me almost sputtering with righteous anger over not just the shoddy research itself, but also the arrogant asshole response by the offending party to a person who dared question their conclusions?  Good times.)

Given this experience -- being dispassionate and not really caring all that much what result you find = ethical, accurate research; being hugely involved in the subject and caring strongly that the results turn out one way instead of another = bad research -- I don't know why I thought that becoming a scientist was a good idea.  (OK, it's not really a surprise -- I was thinking of a lot of other things and I was wrong about many of them.)  I guess I really held to the common, naive view that scientists are interested in uncovering the truth.  I expected that doing science would be like doing market research, only more theoretical, more interesting, and with greater freedom (HAH!).  There's a great critique -- I can't put my hands on it right now -- that scientists are not objective truth seekers but instead are like lawyers: they attempt to make winning arguments for their case.  (This position seems to be pretty much accepted by all the scientists in social psychology and consumer behavior that I know to have an opinion on the matter.  Multiple faculty members I know responded to this critique with a variant of, Yep, so you better go for it.) 

One of the best classes I took in my PhD program was on research methods in social psychology, in which we read a hundred papers that discussed what the fuck is wrong with the experimental and statistical methods being used in empirical psychology and related fields (e.g., medicine).  It was wonderful, thought-provoking, and eye-opening, and I left the class with the feeling that I was now a part of this dirty, tainted, messed up enterprise and not really sure how to proceed.  I did not feel that I was at any great risk of becoming a Diederik Stapel-esque fraud -- flat-out making up data on a massive scale (my favorite perhaps being when he claimed to have collected data from a high school that does not even exist).  But it did feel that there was a slippery slope here that one could go down without even really trying, particularly given the "sloppy research culture" in the field and the increasing importance and difficulty of publishing papers in top journals that have come to expect, as my thesis advisor put it, "too much of the data." 

One of the best things about leaving academia is not being in the shitty position of:

(1) Staying ethical as I see it (despite feeling all kinds of pressure to take little short-cuts and engage in questionable practices that people in the field conveniently do not see as cheating) and finding it frustrating/difficult/impossible? to get published and hence be a scholarly failure... or trying to work 100-hour weeks (compared to other people's 80) to run even more studies and still probably be a (relative) scholarly failure.

(2) Starting to see the short-cuts and questionable practices as necessary evils, required for competing for journal pages on an equal footing with everyone else.  (Robert and I both thought of the situation facing racers in the Tour de France on this issue.)

(3) Starting to see all those things as reasonable, rationalizing and justifying my behavior as just part of the way the work in the field is done, not really so bad, not compromising the quality of the work or my professional integrity.

(4)  Starting to engage in these behaviors without really even being consciously aware that I'm doing so.  I'm just eliminating some outliers (not cherry picking data to conform to my hypotheses).  I'm just refining my stimuli (not re-running the same experiment over and over until I get the results I want).
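The danger in (4) is not hypothetical; it's basic statistics.  Here's a toy simulation (mine, not from the post -- all function names and numbers are illustrative) showing how simply re-running a study that tests a true null hypothesis until one attempt "works" inflates the false positive rate well past the nominal 5%:

```python
# Toy Monte Carlo: optional re-running of a null experiment inflates
# the false positive rate (illustrative sketch, not anyone's real study).
import math
import random

random.seed(42)

def one_experiment(n=30):
    """Draw n values from N(0, 1) and test mean == 0; return two-sided p-value.

    Sigma is known (1), so this is a simple z-test: z = xbar * sqrt(n).
    """
    xs = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)
    # Two-sided p-value from the standard normal CDF (via math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(max_reruns, trials=5000):
    """Fraction of null 'projects' that reach p < .05 within max_reruns attempts."""
    hits = 0
    for _ in range(trials):
        if any(one_experiment() < 0.05 for _ in range(max_reruns)):
            hits += 1
    return hits / trials

print(false_positive_rate(1))  # close to the nominal alpha of .05
print(false_positive_rate(5))  # analytically about 1 - 0.95**5, i.e. ~23%
```

The researcher who reports only the successful attempt never has to lie about any single experiment -- each one really did produce that p-value -- which is exactly what makes the slope so slippery.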

Bottom line, I'm really glad that I'm not facing the serious risk of becoming a scientific creep of subjectivity.
